labath created this revision.
labath added a reviewer: clayborg.
labath added a subscriber: lldb-commits.
Herald added subscribers: danalbert, tberghammer.
This change adds the ability for the client to send multiple packets without waiting for the response to the first one. The individual senders use a queue to register themselves when they have sent a packet; when they reach the front of the queue, it is their turn to read a packet. This does not change the protocol in any way -- a well-behaved stub which just sits in a ReadPacketAndSendResponse() loop will not notice any difference. It does, however, make a huge difference on connections with non-negligible latency (anything over 10ms certainly fits the bill), because we will have multiple packets in flight at the same time. The queueing system avoids the need for a separate thread doing the reading/writing and should add no overhead to the common case of sequential packet streams.

No code except the unit test actually uses the new functionality yet, as the default is for all senders to acquire an exclusive connection lock. Any packets using this will need to be enabled on a one-off basis, after making sure that there can be no bad interactions with other packets which can get sent concurrently. Some obvious cases where this can *not* be enabled are:
- AckMode: when using unreliable connections, we can never be sure that the other side has received our packet, and we may need to do retransmissions.
- packets which need to be sent as an atomic sequence (e.g. if we don't have the thread suffix enabled, then we need to send Hg plus another packet together).

The first packet I would try to enable is qModuleInfo, which is a read-only packet with no side effects, and we need to send a lot of them (over 150 when attaching to a typical Android application), so it can give a big speed boost there. However, I can see this being useful in other places as well (parallel thread backtrace computation?).

To make the code thread-sanitizer clean, I have had to add a lock to the packet history class, as we can now be sending and receiving a packet simultaneously.
---------- WIP ----------
This code does not actually work yet (apart from the unit tests), because the read/write lock does not like to be locked recursively. I'll need to weed the recursive locking out first (which should be pretty orthogonal to the changes here).
---------- WIP ----------

https://reviews.llvm.org/D22914

Files:
  source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
  source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
  source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
  source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h
  source/Plugins/Process/gdb-remote/GDBRemoteCommunicationClient.cpp
  source/Plugins/Process/gdb-remote/GDBRemoteRegisterContext.cpp
  source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp
  unittests/Process/gdb-remote/GDBRemoteClientBaseTest.cpp
Index: unittests/Process/gdb-remote/GDBRemoteClientBaseTest.cpp =================================================================== --- unittests/Process/gdb-remote/GDBRemoteClientBaseTest.cpp +++ unittests/Process/gdb-remote/GDBRemoteClientBaseTest.cpp @@ -229,6 +229,7 @@ { StringExtractorGDBRemote continue_response, async_response, response; const bool send_async = true; + const GDBRemoteClientBase::Lock::Kind send_kind = GDBRemoteClientBase::Lock::Exclusive; ContinueFixture fix; if (HasFailure()) return; @@ -243,10 +244,11 @@ event_sp); // Sending without async enabled should fail. - ASSERT_EQ(PacketResult::ErrorSendFailed, fix.client.SendPacketAndWaitForResponse("qTest1", response, !send_async)); + ASSERT_EQ(PacketResult::ErrorSendFailed, + fix.client.SendPacketAndWaitForResponse("qTest1", response, send_kind, !send_async)); std::future<PacketResult> async_result = std::async(std::launch::async, [&] { - return fix.client.SendPacketAndWaitForResponse("qTest2", async_response, send_async); + return fix.client.SendPacketAndWaitForResponse("qTest2", async_response, send_kind, send_async); }); // First we'll get interrupted. @@ -307,6 +309,7 @@ { StringExtractorGDBRemote continue_response, async_response, response; const bool send_async = true; + const GDBRemoteClientBase::Lock::Kind send_kind = GDBRemoteClientBase::Lock::Exclusive; ContinueFixture fix; if (HasFailure()) return; @@ -338,7 +341,7 @@ // Packet stream should remain synchronized. 
std::future<PacketResult> send_result = std::async(std::launch::async, [&] { - return fix.client.SendPacketAndWaitForResponse("qTest", async_response, !send_async); + return fix.client.SendPacketAndWaitForResponse("qTest", async_response, send_kind, !send_async); }); ASSERT_EQ(PacketResult::Success, fix.server.GetPacket(response)); ASSERT_EQ("qTest", response.GetStringRef()); @@ -396,3 +399,81 @@ ASSERT_TRUE(async_result.get()); ASSERT_EQ(eStateInvalid, continue_state.get()); } + +TEST(GDBRemoteClientBaseTest, SendTwoPacketsInterleaved) +{ + StringExtractorGDBRemote async_response1, async_response2, response1, response2; + const GDBRemoteClientBase::Lock::Kind send_kind = GDBRemoteClientBase::Lock::Shared; + ContinueFixture fix; + if (HasFailure()) + return; + + std::future<PacketResult> async_result1 = std::async(std::launch::async, [&] { + return fix.client.SendPacketAndWaitForResponse("qTest1", async_response1, send_kind); + }); + + std::future<PacketResult> async_result2 = std::async(std::launch::async, [&] { + return fix.client.SendPacketAndWaitForResponse("qTest2", async_response2, send_kind); + }); + + // Make sure we can get both requests before we send out the response. The order in which + // they come is non-deterministic. + ASSERT_EQ(PacketResult::Success, fix.server.GetPacket(response1)); + ASSERT_TRUE(response1.GetStringRef() == "qTest1" || response1.GetStringRef() == "qTest2"); + ASSERT_EQ(PacketResult::Success, fix.server.GetPacket(response2)); + ASSERT_TRUE(response2.GetStringRef() == "qTest1" || response2.GetStringRef() == "qTest2"); + ASSERT_NE(response1.GetStringRef(), response2.GetStringRef()); + + // Send both responses (in the correct order). + ASSERT_EQ(PacketResult::Success, fix.server.SendPacket(response1.GetStringRef() + "X")); + ASSERT_EQ(PacketResult::Success, fix.server.SendPacket(response2.GetStringRef() + "X")); + + // And make sure they get received. 
+ ASSERT_EQ(PacketResult::Success, async_result1.get()); + ASSERT_EQ("qTest1X", async_response1.GetStringRef()); + ASSERT_EQ(PacketResult::Success, async_result2.get()); + ASSERT_EQ("qTest2X", async_response2.GetStringRef()); +} + +TEST(GDBRemoteClientBaseTest, SendManyPacketsStress) +{ + StringExtractorGDBRemote async_response1, async_response2, response1, response2; + const GDBRemoteClientBase::Lock::Kind send_kind = GDBRemoteClientBase::Lock::Shared; + ContinueFixture fix; + if (HasFailure()) + return; + + // Fire up the senders. + std::vector<std::thread> packet_threads; + for (unsigned i = 0; i < 4; ++i) + { + packet_threads.emplace_back([i, &fix] { + std::ostringstream packet; + packet << "qTest" << i; + StringExtractorGDBRemote response; + for (unsigned j = 0; j < 10; ++j) + { + ASSERT_EQ(PacketResult::Success, + fix.client.SendPacketAndWaitForResponse(packet.str(), response, send_kind)); + ASSERT_EQ(packet.str(), response.GetStringRef()); + } + }); + } + + // Our "server" will just mirror the packets back at the senders. + std::thread mirror_thread([&] { + PacketResult result; + StringExtractorGDBRemote packet; + while ((result = fix.server.GetPacket(packet)) == PacketResult::Success) + ASSERT_EQ(PacketResult::Success, fix.server.SendPacket(packet.GetStringRef())); + ASSERT_EQ(PacketResult::ErrorDisconnected, result); + }); + + // Let the senders finish. + for (std::thread &t : packet_threads) + t.join(); + + // Close the client connection so the server thread can exit. 
+ fix.client.Disconnect(); + mirror_thread.join(); +} Index: source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp =================================================================== --- source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp +++ source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp @@ -528,7 +528,8 @@ const int packet_len = ::snprintf (packet, sizeof(packet), "qRegisterInfo%x", reg_num); assert (packet_len < (int)sizeof(packet)); StringExtractorGDBRemote response; - if (m_gdb_comm.SendPacketAndWaitForResponse(packet, packet_len, response, false) == GDBRemoteCommunication::PacketResult::Success) + if (m_gdb_comm.SendPacketAndWaitForResponse(llvm::StringRef(packet, packet_len), response) == + GDBRemoteCommunication::PacketResult::Success) { response_type = response.GetResponseType(); if (response_type == StringExtractorGDBRemote::eResponse) @@ -1143,7 +1144,7 @@ for (size_t idx = 0; idx < num_cmds; idx++) { StringExtractorGDBRemote response; - m_gdb_comm.SendPacketAndWaitForResponse (GetExtraStartupCommands().GetArgumentAtIndex(idx), response, false); + m_gdb_comm.SendPacketAndWaitForResponse(GetExtraStartupCommands().GetArgumentAtIndex(idx), response); } return error; } @@ -1630,7 +1631,7 @@ { // Send vStopped StringExtractorGDBRemote response; - m_gdb_comm.SendPacketAndWaitForResponse("vStopped", response, false); + m_gdb_comm.SendPacketAndWaitForResponse("vStopped", response); // OK represents end of signal list if (response.IsOKResponse()) @@ -4996,7 +4997,8 @@ packet.PutCStringAsRawHex8(file_path.c_str()); StringExtractorGDBRemote response; - if (m_gdb_comm.SendPacketAndWaitForResponse(packet.GetString().c_str(), response, false) != GDBRemoteCommunication::PacketResult::Success) + if (m_gdb_comm.SendPacketAndWaitForResponse(packet.GetString(), response) != + GDBRemoteCommunication::PacketResult::Success) return Error("Sending qFileLoadAddress packet failed"); if (response.IsErrorResponse()) @@ -5371,7 +5373,8 @@ const char *packet_cstr = 
command.GetArgumentAtIndex(0); bool send_async = true; StringExtractorGDBRemote response; - process->GetGDBRemote().SendPacketAndWaitForResponse(packet_cstr, response, send_async); + process->GetGDBRemote().SendPacketAndWaitForResponse(packet_cstr, response, + GDBRemoteClientBase::Lock::Exclusive, send_async); result.SetStatus (eReturnStatusSuccessFinishResult); Stream &output_strm = result.GetOutputStream(); output_strm.Printf (" packet: %s\n", packet_cstr); @@ -5430,7 +5433,8 @@ bool send_async = true; StringExtractorGDBRemote response; - process->GetGDBRemote().SendPacketAndWaitForResponse(packet_cstr, response, send_async); + process->GetGDBRemote().SendPacketAndWaitForResponse(packet_cstr, response, + GDBRemoteClientBase::Lock::Exclusive, send_async); result.SetStatus (eReturnStatusSuccessFinishResult); Stream &output_strm = result.GetOutputStream(); output_strm.Printf (" packet: %s\n", packet_cstr); Index: source/Plugins/Process/gdb-remote/GDBRemoteRegisterContext.cpp =================================================================== --- source/Plugins/Process/gdb-remote/GDBRemoteRegisterContext.cpp +++ source/Plugins/Process/gdb-remote/GDBRemoteRegisterContext.cpp @@ -395,7 +395,7 @@ reg_info->byte_size, // dst length m_reg_data.GetByteOrder())) // dst byte order { - GDBRemoteClientBase::Lock lock(gdb_comm, false); + GDBRemoteClientBase::Lock lock(gdb_comm); if (lock) { const bool thread_suffix_supported = gdb_comm.GetThreadSuffixSupported(); @@ -564,7 +564,7 @@ const bool use_g_packet = gdb_comm.AvoidGPackets ((ProcessGDBRemote *)process) == false; - GDBRemoteClientBase::Lock lock(gdb_comm, false); + GDBRemoteClientBase::Lock lock(gdb_comm); if (lock) { SyncThreadState(process); @@ -673,7 +673,7 @@ const bool use_g_packet = gdb_comm.AvoidGPackets ((ProcessGDBRemote *)process) == false; StringExtractorGDBRemote response; - GDBRemoteClientBase::Lock lock(gdb_comm, false); + GDBRemoteClientBase::Lock lock(gdb_comm); if (lock) { const bool 
thread_suffix_supported = gdb_comm.GetThreadSuffixSupported(); Index: source/Plugins/Process/gdb-remote/GDBRemoteCommunicationClient.cpp =================================================================== --- source/Plugins/Process/gdb-remote/GDBRemoteCommunicationClient.cpp +++ source/Plugins/Process/gdb-remote/GDBRemoteCommunicationClient.cpp @@ -253,7 +253,7 @@ GDBRemoteCommunication::ScopedTimeout timeout (*this, std::max (old_timeout, minimum_timeout)); StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse("QStartNoAckMode", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("QStartNoAckMode", response) == PacketResult::Success) { if (response.IsOKResponse()) { @@ -274,7 +274,7 @@ m_supports_threads_in_stop_reply = eLazyBoolNo; StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse("QListThreadsInStopReply", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("QListThreadsInStopReply", response) == PacketResult::Success) { if (response.IsOKResponse()) m_supports_threads_in_stop_reply = eLazyBoolYes; @@ -290,7 +290,7 @@ m_attach_or_wait_reply = eLazyBoolNo; StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse("qVAttachOrWaitSupported", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("qVAttachOrWaitSupported", response) == PacketResult::Success) { if (response.IsOKResponse()) m_attach_or_wait_reply = eLazyBoolYes; @@ -310,7 +310,7 @@ m_prepare_for_reg_writing_reply = eLazyBoolNo; StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse("qSyncThreadStateSupported", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("qSyncThreadStateSupported", response) == PacketResult::Success) { if (response.IsOKResponse()) m_prepare_for_reg_writing_reply = eLazyBoolYes; @@ -408,9 +408,7 @@ } StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse(packet.GetData(), - response, - 
/*send_async=*/false) == PacketResult::Success) + if (SendPacketAndWaitForResponse(packet.GetData(), response) == PacketResult::Success) { const char *response_cstr = response.GetStringRef().c_str(); if (::strstr (response_cstr, "qXfer:auxv:read+")) @@ -508,7 +506,7 @@ { StringExtractorGDBRemote response; m_supports_thread_suffix = eLazyBoolNo; - if (SendPacketAndWaitForResponse("QThreadSuffixSupported", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("QThreadSuffixSupported", response) == PacketResult::Success) { if (response.IsOKResponse()) m_supports_thread_suffix = eLazyBoolYes; @@ -528,7 +526,7 @@ m_supports_vCont_C = eLazyBoolNo; m_supports_vCont_s = eLazyBoolNo; m_supports_vCont_S = eLazyBoolNo; - if (SendPacketAndWaitForResponse("vCont?", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("vCont?", response) == PacketResult::Success) { const char *response_cstr = response.GetStringRef().c_str(); if (::strstr (response_cstr, ";c")) @@ -591,8 +589,8 @@ snprintf(packet, sizeof(packet), "p0;thread:%" PRIx64 ";", tid); else snprintf(packet, sizeof(packet), "p0"); - - if (SendPacketAndWaitForResponse(packet, response, false) == PacketResult::Success) + + if (SendPacketAndWaitForResponse(packet, response) == PacketResult::Success) { if (response.IsNormalResponse()) m_supports_p = eLazyBoolYes; @@ -611,7 +609,7 @@ { StringExtractorGDBRemote response; response.SetResponseValidatorToJSON(); - if (SendPacketAndWaitForResponse("jThreadsInfo", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("jThreadsInfo", response) == PacketResult::Success) { if (response.IsUnsupportedResponse()) { @@ -634,7 +632,7 @@ { StringExtractorGDBRemote response; m_supports_jThreadExtendedInfo = eLazyBoolNo; - if (SendPacketAndWaitForResponse("jThreadExtendedInfo:", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("jThreadExtendedInfo:", response) == PacketResult::Success) { if 
(response.IsOKResponse()) { @@ -652,7 +650,7 @@ { StringExtractorGDBRemote response; m_supports_jLoadedDynamicLibrariesInfos = eLazyBoolNo; - if (SendPacketAndWaitForResponse("jGetLoadedDynamicLibrariesInfos:", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("jGetLoadedDynamicLibrariesInfos:", response) == PacketResult::Success) { if (response.IsOKResponse()) { @@ -670,7 +668,7 @@ { StringExtractorGDBRemote response; m_supports_jGetSharedCacheInfo = eLazyBoolNo; - if (SendPacketAndWaitForResponse("jGetSharedCacheInfo:", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("jGetSharedCacheInfo:", response) == PacketResult::Success) { if (response.IsOKResponse()) { @@ -690,7 +688,7 @@ m_supports_x = eLazyBoolNo; char packet[256]; snprintf (packet, sizeof (packet), "x0,0"); - if (SendPacketAndWaitForResponse(packet, response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse(packet, response) == PacketResult::Success) { if (response.IsOKResponse()) m_supports_x = eLazyBoolYes; @@ -706,7 +704,7 @@ std::string &response_string ) { - Lock lock(*this, false); + Lock lock(*this); if (!lock) { Log *log (ProcessGDBRemoteLog::GetLogIfAnyCategoryIsSet (GDBR_LOG_PROCESS | GDBR_LOG_PACKETS)); @@ -1098,7 +1096,7 @@ m_qGDBServerVersion_is_valid = eLazyBoolNo; StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse ("qGDBServerVersion", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("qGDBServerVersion", response) == PacketResult::Success) { if (response.IsNormalResponse()) { @@ -1222,7 +1220,7 @@ { StringExtractorGDBRemote response; std::string packet = "QEnableCompression:type:" + avail_name + ";"; - if (SendPacketAndWaitForResponse (packet.c_str(), response, false) != PacketResult::Success) + if (SendPacketAndWaitForResponse(packet, response) != PacketResult::Success) return; if (response.IsOKResponse()) @@ -1255,7 +1253,7 @@ 
GDBRemoteCommunicationClient::GetDefaultThreadId (lldb::tid_t &tid) { StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse("qC",response,false) != PacketResult::Success) + if (SendPacketAndWaitForResponse("qC", response) != PacketResult::Success) return false; if (!response.IsNormalResponse()) @@ -1276,7 +1274,7 @@ { m_qHostInfo_is_valid = eLazyBoolNo; StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse ("qHostInfo", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("qHostInfo", response) == PacketResult::Success) { if (response.IsNormalResponse()) { @@ -1929,7 +1927,7 @@ GDBRemoteCommunicationClient::GetWorkingDir(FileSpec &working_dir) { StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse ("qGetWorkingDir", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("qGetWorkingDir", response) == PacketResult::Success) { if (response.IsUnsupportedResponse()) return false; @@ -2135,7 +2133,7 @@ GetHostInfo (); StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse ("qProcessInfo", response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse("qProcessInfo", response) == PacketResult::Success) { if (response.IsNormalResponse()) { @@ -2704,7 +2702,7 @@ connection_urls.clear(); StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse("qQueryGDBServer", response, false) != PacketResult::Success) + if (SendPacketAndWaitForResponse("qQueryGDBServer", response) != PacketResult::Success) return 0; StructuredData::ObjectSP data = StructuredData::ParseJSON(response.GetStringRef()); @@ -2920,7 +2918,7 @@ { thread_ids.clear(); - Lock lock(*this, false); + Lock lock(*this); if (lock) { sequence_mutex_unavailable = false; @@ -2977,7 +2975,7 @@ lldb::addr_t GDBRemoteCommunicationClient::GetShlibInfoAddr() { - Lock lock(*this, false); + Lock lock(*this); if (lock) { StringExtractorGDBRemote response; @@ -3474,7 +3472,7 @@ bool 
GDBRemoteCommunicationClient::ReadRegister(lldb::tid_t tid, uint32_t reg, StringExtractorGDBRemote &response) { - Lock lock(*this, false); + Lock lock(*this); if (lock) { const bool thread_suffix_supported = GetThreadSuffixSupported(); @@ -3488,7 +3486,7 @@ else packet_len = ::snprintf (packet, sizeof(packet), "p%x", reg); assert (packet_len < ((int)sizeof(packet) - 1)); - return SendPacketAndWaitForResponse(packet, response, false) == PacketResult::Success; + return SendPacketAndWaitForResponse(packet, response) == PacketResult::Success; } } else if (Log *log = ProcessGDBRemoteLog::GetLogIfAnyCategoryIsSet(GDBR_LOG_PROCESS | GDBR_LOG_PACKETS)) @@ -3503,7 +3501,7 @@ bool GDBRemoteCommunicationClient::ReadAllRegisters (lldb::tid_t tid, StringExtractorGDBRemote &response) { - Lock lock(*this, false); + Lock lock(*this); if (lock) { const bool thread_suffix_supported = GetThreadSuffixSupported(); @@ -3518,7 +3516,7 @@ else packet_len = ::snprintf (packet, sizeof(packet), "g"); assert (packet_len < ((int)sizeof(packet) - 1)); - return SendPacketAndWaitForResponse(packet, response, false) == PacketResult::Success; + return SendPacketAndWaitForResponse(packet, response) == PacketResult::Success; } } else if (Log *log = ProcessGDBRemoteLog::GetLogIfAnyCategoryIsSet(GDBR_LOG_PROCESS | GDBR_LOG_PACKETS)) @@ -3535,7 +3533,7 @@ return false; m_supports_QSaveRegisterState = eLazyBoolYes; - Lock lock(*this, false); + Lock lock(*this); if (lock) { const bool thread_suffix_supported = GetThreadSuffixSupported(); @@ -3549,7 +3547,7 @@ StringExtractorGDBRemote response; - if (SendPacketAndWaitForResponse(packet, response, false) == PacketResult::Success) + if (SendPacketAndWaitForResponse(packet, response) == PacketResult::Success) { if (response.IsUnsupportedResponse()) { @@ -3583,7 +3581,7 @@ if (m_supports_QSaveRegisterState == eLazyBoolNo) return false; - Lock lock(*this, false); + Lock lock(*this); if (lock) { const bool thread_suffix_supported = GetThreadSuffixSupported(); @@ 
-3596,8 +3594,8 @@ ::snprintf (packet, sizeof(packet), "QRestoreRegisterState:%u" PRIx64 ";", save_id); StringExtractorGDBRemote response; - - if (SendPacketAndWaitForResponse(packet, response, false) == PacketResult::Success) + + if (SendPacketAndWaitForResponse(packet, response) == PacketResult::Success) { if (response.IsOKResponse()) { @@ -3732,10 +3730,7 @@ << std::hex << offset << "," << std::hex << size; - GDBRemoteCommunication::PacketResult res = - SendPacketAndWaitForResponse( packet.str().c_str(), - chunk, - false ); + GDBRemoteCommunication::PacketResult res = SendPacketAndWaitForResponse(packet.str(), chunk); if ( res != GDBRemoteCommunication::PacketResult::Success ) { err.SetErrorString( "Error sending $qXfer packet" ); @@ -3819,7 +3814,7 @@ if (m_supports_qSymbol && m_qSymbol_requests_done == false) { - Lock lock(*this, false); + Lock lock(*this); if (lock) { StreamString packet; Index: source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h =================================================================== --- source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h +++ source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h @@ -208,27 +208,19 @@ ~History (); - // For single char packets for ack, nack and /x03 void - AddPacket (char packet_char, - PacketType type, - uint32_t bytes_transmitted); + AddPacket(llvm::StringRef packet, PacketType type, uint32_t bytes_transmitted); void - AddPacket (const std::string &src, - uint32_t src_len, - PacketType type, - uint32_t bytes_transmitted); - - void Dump (Stream &strm) const; void Dump (Log *log) const; bool DidDumpToLog () const { + std::lock_guard<std::mutex> lock(m_mutex); return m_dumped_to_log; } @@ -266,6 +258,7 @@ return i % m_packets.size(); } + mutable std::mutex m_mutex; std::vector<Entry> m_packets; uint32_t m_curr_idx; uint32_t m_total_packet_count; Index: source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp =================================================================== 
--- source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp +++ source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp @@ -69,33 +69,14 @@ } void -GDBRemoteCommunication::History::AddPacket (char packet_char, - PacketType type, - uint32_t bytes_transmitted) +GDBRemoteCommunication::History::AddPacket(llvm::StringRef packet, PacketType type, uint32_t bytes_transmitted) { + std::lock_guard<std::mutex> lock(m_mutex); const size_t size = m_packets.size(); if (size > 0) { const uint32_t idx = GetNextIndex(); - m_packets[idx].packet.assign (1, packet_char); - m_packets[idx].type = type; - m_packets[idx].bytes_transmitted = bytes_transmitted; - m_packets[idx].packet_idx = m_total_packet_count; - m_packets[idx].tid = Host::GetCurrentThreadID(); - } -} - -void -GDBRemoteCommunication::History::AddPacket (const std::string &src, - uint32_t src_len, - PacketType type, - uint32_t bytes_transmitted) -{ - const size_t size = m_packets.size(); - if (size > 0) - { - const uint32_t idx = GetNextIndex(); - m_packets[idx].packet.assign (src, 0, src_len); + m_packets[idx].packet = packet; m_packets[idx].type = type; m_packets[idx].bytes_transmitted = bytes_transmitted; m_packets[idx].packet_idx = m_total_packet_count; @@ -106,6 +87,7 @@ void GDBRemoteCommunication::History::Dump (Stream &strm) const { + std::lock_guard<std::mutex> lock(m_mutex); const uint32_t size = GetNumPacketsInHistory (); const uint32_t first_idx = GetFirstSavedPacketIndex (); const uint32_t stop_idx = m_curr_idx + size; @@ -127,6 +109,7 @@ void GDBRemoteCommunication::History::Dump (Log *log) const { + std::lock_guard<std::mutex> lock(m_mutex); if (log && !m_dumped_to_log) { m_dumped_to_log = true; @@ -206,7 +189,7 @@ const size_t bytes_written = Write (&ch, 1, status, NULL); if (log) log->Printf ("<%4" PRIu64 "> send packet: %c", (uint64_t)bytes_written, ch); - m_history.AddPacket (ch, History::ePacketTypeSend, bytes_written); + m_history.AddPacket(llvm::StringRef(&ch, 1), History::ePacketTypeSend, 
bytes_written); return bytes_written; } @@ -219,7 +202,7 @@ const size_t bytes_written = Write (&ch, 1, status, NULL); if (log) log->Printf("<%4" PRIu64 "> send packet: %c", (uint64_t)bytes_written, ch); - m_history.AddPacket (ch, History::ePacketTypeSend, bytes_written); + m_history.AddPacket(llvm::StringRef(&ch, 1), History::ePacketTypeSend, bytes_written); return bytes_written; } @@ -278,8 +261,7 @@ log->Printf("<%4" PRIu64 "> send packet: %.*s", (uint64_t)bytes_written, (int)packet_length, packet_data); } - m_history.AddPacket (packet.GetString(), packet_length, History::ePacketTypeSend, bytes_written); - + m_history.AddPacket(packet.GetString(), History::ePacketTypeSend, bytes_written); if (bytes_written == packet_length) { @@ -953,7 +935,7 @@ } } - m_history.AddPacket (m_bytes.c_str(), total_length, History::ePacketTypeRecv, total_length); + m_history.AddPacket(m_bytes, History::ePacketTypeRecv, total_length); // Clear packet_str in case there is some existing data in it. packet_str.clear(); Index: source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h =================================================================== --- source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h +++ source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h @@ -14,6 +14,8 @@ #include <condition_variable> +#include "llvm/Support/RWMutex.h" + namespace lldb_private { namespace process_gdb_remote @@ -33,34 +35,16 @@ HandleStopReply() = 0; }; - GDBRemoteClientBase(const char *comm_name, const char *listener_name); - - bool - SendAsyncSignal(int signo); - - bool - Interrupt(); - - lldb::StateType - SendContinuePacketAndWaitForResponse(ContinueDelegate &delegate, const UnixSignals &signals, - llvm::StringRef payload, StringExtractorGDBRemote &response); - - PacketResult - SendPacketAndWaitForResponse(const char *payload, size_t len, StringExtractorGDBRemote &response, bool send_async) - { - return SendPacketAndWaitForResponse(llvm::StringRef(payload, len), response, send_async); - } - 
- PacketResult - SendPacketAndWaitForResponse(llvm::StringRef payload, StringExtractorGDBRemote &response, bool send_async); - - bool - SendvContPacket(llvm::StringRef payload, StringExtractorGDBRemote &response); - class Lock { public: - Lock(GDBRemoteClientBase &comm, bool interrupt); + enum Kind + { + Shared, + Exclusive + }; + + Lock(GDBRemoteClientBase &comm, Kind kind = Exclusive, bool interrupt = false); ~Lock(); explicit operator bool() { return m_acquired; } @@ -73,15 +57,40 @@ } private: - std::unique_lock<std::recursive_mutex> m_async_lock; GDBRemoteClientBase &m_comm; + Kind m_kind; bool m_acquired; bool m_did_interrupt; void SyncWithContinueThread(bool interrupt); }; + GDBRemoteClientBase(const char *comm_name, const char *listener_name); + + bool + SendAsyncSignal(int signo); + + bool + Interrupt(); + + lldb::StateType + SendContinuePacketAndWaitForResponse(ContinueDelegate &delegate, const UnixSignals &signals, + llvm::StringRef payload, StringExtractorGDBRemote &response); + + PacketResult + SendPacketAndWaitForResponse(const char *payload, size_t len, StringExtractorGDBRemote &response, bool send_async) + { + return SendPacketAndWaitForResponse(llvm::StringRef(payload, len), response, Lock::Exclusive, send_async); + } + + PacketResult + SendPacketAndWaitForResponse(llvm::StringRef payload, StringExtractorGDBRemote &response, + Lock::Kind kind = Lock::Exclusive, bool send_async = false); + + bool + SendvContPacket(llvm::StringRef payload, StringExtractorGDBRemote &response); + protected: void HandleAsyncStdout(ContinueDelegate &delegate, StringExtractorGDBRemote &payload); @@ -117,13 +126,21 @@ bool m_should_stop; // Whether we should resume after a stop. // end of continue thread synchronization block - // This handles the synchronization between individual async threads. For now they just use a - // simple mutex. - std::recursive_mutex m_async_mutex; + // This handles the synchronization between individual async threads. 
Either many threads can + // share the connection, or one thread has exclusive access. + llvm::sys::RWMutex m_async_mutex; + + // The data structures for matching up concurrent requests and responses. + std::mutex m_queue_mutex; + std::condition_variable m_queue_cv; + std::queue<const void *> m_queue; bool ShouldStop(const UnixSignals &signals, StringExtractorGDBRemote &response); + PacketResult + ReadAndValidatePacket(StringExtractorGDBRemote &response); + class ContinueLock { public: Index: source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp =================================================================== --- source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp +++ source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp @@ -132,7 +132,7 @@ bool GDBRemoteClientBase::SendAsyncSignal(int signo) { - Lock lock(*this, true); + Lock lock(*this, Lock::Exclusive, true); if (!lock || !lock.DidInterrupt()) return false; @@ -145,17 +145,17 @@ bool GDBRemoteClientBase::Interrupt() { - Lock lock(*this, true); + Lock lock(*this, Lock::Exclusive, true); if (!lock.DidInterrupt()) return false; m_should_stop = true; return true; } GDBRemoteCommunication::PacketResult GDBRemoteClientBase::SendPacketAndWaitForResponse(llvm::StringRef payload, StringExtractorGDBRemote &response, - bool send_async) + Lock::Kind kind, bool send_async) { - Lock lock(*this, send_async); + Lock lock(*this, kind, send_async); if (!lock) { if (Log *log = ProcessGDBRemoteLog::GetLogIfAllCategoriesSet(GDBR_LOG_PROCESS)) @@ -170,25 +170,49 @@ GDBRemoteCommunication::PacketResult GDBRemoteClientBase::SendPacketAndWaitForResponseNoLock(llvm::StringRef payload, StringExtractorGDBRemote &response) { - PacketResult packet_result = SendPacketNoLock(payload.data(), payload.size()); - if (packet_result != PacketResult::Success) - return packet_result; + PacketResult packet_result; + { + std::unique_lock<std::mutex> lock(m_queue_mutex); + + packet_result = SendPacketNoLock(payload.data(), 
payload.size()); + if (packet_result != PacketResult::Success) + return packet_result; + + // Any unique value would work here. Address of a local variable is guaranteed to satisfy that. + m_queue.push(&packet_result); + m_queue_cv.wait(lock, [this, &packet_result] { return m_queue.front() == &packet_result; }); + } + + // Do the blocking part (reading) with the queue lock released. + packet_result = ReadAndValidatePacket(response); + + { + std::lock_guard<std::mutex> lock(m_queue_mutex); + lldbassert(m_queue.front() == &packet_result); + m_queue.pop(); + } + m_queue_cv.notify_all(); + return packet_result; +} +GDBRemoteCommunication::PacketResult +GDBRemoteClientBase::ReadAndValidatePacket(StringExtractorGDBRemote &response) +{ + PacketResult packet_result; const size_t max_response_retries = 3; for (size_t i = 0; i < max_response_retries; ++i) { packet_result = ReadPacket(response, GetPacketTimeoutInMicroSeconds(), true); // Make sure we received a response if (packet_result != PacketResult::Success) - return packet_result; + break; // Make sure our response is valid for the payload that was sent if (response.ValidateResponse()) - return packet_result; + break; // Response says it wasn't valid Log *log = ProcessGDBRemoteLog::GetLogIfAllCategoriesSet(GDBR_LOG_PACKETS); if (log) - log->Printf("error: packet with payload \"%.*s\" got invalid response \"%s\": %s", int(payload.size()), - payload.data(), response.GetStringRef().c_str(), + log->Printf("error: got invalid response \"%s\": %s", response.GetStringRef().c_str(), (i == (max_response_retries - 1)) ?
"using invalid response and giving up" : "ignoring response and waiting for another"); } @@ -203,7 +227,7 @@ log->Printf("GDBRemoteCommunicationClient::%s ()", __FUNCTION__); // we want to lock down packet sending while we continue - Lock lock(*this, true); + Lock lock(*this, Lock::Exclusive, true); if (log) log->Printf("GDBRemoteCommunicationClient::%s () sending vCont packet: %.*s", __FUNCTION__, int(payload.size()), @@ -329,12 +353,17 @@ // GDBRemoteClientBase::Lock // /////////////////////////////// -GDBRemoteClientBase::Lock::Lock(GDBRemoteClientBase &comm, bool interrupt) - : m_async_lock(comm.m_async_mutex, std::defer_lock), m_comm(comm), m_acquired(false), m_did_interrupt(false) +GDBRemoteClientBase::Lock::Lock(GDBRemoteClientBase &comm, Kind kind, bool interrupt) + : m_comm(comm), m_kind(kind), m_acquired(false), m_did_interrupt(false) { SyncWithContinueThread(interrupt); - if (m_acquired) - m_async_lock.lock(); + if (!m_acquired) + return; + + if (m_kind == Shared) + m_comm.m_async_mutex.lock_shared(); + else + m_comm.m_async_mutex.lock(); } void @@ -375,6 +404,12 @@ { if (!m_acquired) return; + + if (m_kind == Shared) + m_comm.m_async_mutex.unlock_shared(); + else + m_comm.m_async_mutex.unlock(); + { std::unique_lock<std::mutex> lock(m_comm.m_mutex); --m_comm.m_async_count;
_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits