Markus Armbruster <arm...@redhat.com> writes:

> Another loose end: event COMMAND_DROPPED seems to lack test coverage.
Hmm, dropping commands serves to limit the request queue.  What limits
the response queue?

Before OOB, the monitor read at most one command, and wrote its
response with monitor_puts().

For input, this leaves queueing to the kernel: if the client sends
commands faster than the server can execute them, eventually the
kernel refuses to buffer more, and the client's send either blocks or
fails with EAGAIN.

Output is a mess.  monitor_puts() uses an output buffer.  It tries to
flush at newline.  Issues:

* If flushing runs into a partial write, the unwritten remainder
  remains in the output buffer until the next newline.  That newline
  may take its own sweet time to arrive.  Could even lead to
  deadlocks, where a client awaits complete output before it sends
  more input.  Bug, predates OOB, doesn't block this series.

* If the client fails to read, the output buffer can grow without
  bound.  Not a security issue; the client is trusted.  Just bad
  workmanship.

OOB doesn't change this for monitors running in the main thread.
Only mux chardevs run there.  Aside: keeping special case code around
just for mux is a really bad idea.  We need to get rid of it.

For monitors running in an I/O thread, we add another buffer: the
response queue.  It's drained by monitor_qmp_bh_responder().  I guess
that means the response queue is effectively bounded by timely
draining.  Correct?

Buffering twice seems silly, but that could be addressed in follow-up
patches.