Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com> writes:

> 24.11.2020 18:44, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all!
>>
>> This patch isn't outdated, it applies on master with a little conflict
>> atomic_mb_read -> qatomic_mb_read
>>
>> We have a new series "[PATCH v2 0/2] Increase amount of data for
>> monitor to read", but I do think that we've started from the wrong
>> point. Actually, we should start from this old patch.
>>
>> 10.07.2019 19:36, Dr. David Alan Gilbert wrote:
>>> * Markus Armbruster (arm...@redhat.com) wrote:
>>>> Did this fall through the cracks?
>>>>
>>>> Denis Plotnikov <dplotni...@virtuozzo.com> writes:
>>>>
>>>>> Right now QMP and HMP monitors read 1 byte at a time from the
>>>>> socket, which is very inefficient. With 100+ VMs on the host this
>>>>> easily results in a lot of unnecessary system calls and CPU usage
>>>>> in the system.
>>>>
>>>> Yes, reading one byte at a time is awful.  But QMP is control plane;
>>>> I didn't expect it to impact system performance.  How are you using
>>>> QMP?  Just curious, not actually opposed to improving QMP
>>>> efficiency.
>>>>
>>>>> This patch changes the amount of data to read to 4096 bytes, which
>>>>> matches the buffer size on the channel level. Fortunately, the
>>>>> monitor protocol is synchronous right now, thus we should not face
>>>>> side effects in reality.
>>>>>
>>>>> Signed-off-by: Denis V. Lunev <d...@openvz.org>
>>>>> Signed-off-by: Denis Plotnikov <dplotni...@virtuozzo.com>
>>>>> ---
>>>>>  include/monitor/monitor.h | 2 +-
>>>>>  monitor.c                 | 2 +-
>>>>>  2 files changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/include/monitor/monitor.h b/include/monitor/monitor.h
>>>>> index c1b40a9cac..afa1ed34a4 100644
>>>>> --- a/include/monitor/monitor.h
>>>>> +++ b/include/monitor/monitor.h
>>>>> @@ -14,7 +14,7 @@ extern __thread Monitor *cur_mon;
>>>>>  #define MONITOR_USE_CONTROL 0x04
>>>>>  #define MONITOR_USE_PRETTY  0x08
>>>>> -#define QMP_REQ_QUEUE_LEN_MAX 8
>>>>> +#define QMP_REQ_QUEUE_LEN_MAX 4096
>>>>
>>>> This looks suspicious.  It's a request count, not a byte count.  Can
>>>> you explain what led you to change it this way?
>>
>> I can explain, because that was my idea:
>>
>> It's a hack to avoid overflowing the queue. With just the second hunk
>> we overflow it and the assertion fails.
I can't see offhand how that happens.  Got a reproducer for me?

>> So, I decided that if we allow reading up to 4096 bytes, we will not
>> add more than 4096 commands simultaneously. And that works.. Uff!
>> Still, now I don't think it's enough: who guarantees that we will not
>> read new commands when the queue is half-full?
>>
>> I think we just need two limits: QUEUE_SOFT_LEN and QUEUE_HARD_LEN
>> (suggest better names). So, when QUEUE_SOFT_LEN is reached, we suspend
>> the monitor (like when the queue is full currently). And in
>> monitor_can_read() we allow reading (QUEUE_HARD_LEN - QUEUE_SOFT_LEN).
>> In this way the queue will never overflow QUEUE_HARD_LEN.

I'm not sure I understand.  Maybe I will once I understand the queue
overflow.  Or revised patches.

>> Also, what is the minimum character length of a command? I just
>> decided that better safe than sorry, considering one character as a
>> possible full command. What is the smallest command which the parser
>> will parse? Is it {} ? Or maybe {"execute":""} ? We can use this
>> knowledge to understand how many commands we should be prepared to
>> accept when we allow N characters in monitor_can_read().

So, 4096 is probably too much for QMP_REQ_QUEUE_LEN_MAX.

I'm suspicious of solutions that tie the request queue length to the
read buffer size.  Maybe my suspicions will dissipate once I understand
the queue overflow.

Now let me answer your question.

The queue is fed by handle_qmp_command().  It gets called by the JSON
parser for each complete JSON value and for each parse error.  When the
JSON value is a JSON object with an 'exec-oob' key and no 'execute' key,
handle_qmp_command() bypasses the queue.  Anything else goes into the
queue.

The shortest JSON values are 0, 1, ..., 9.  A sequence of them needs to
be separated by whitespace.  N characters of input can therefore produce
up to ceil(N/2) JSON values.
Input that makes the parser report one error per input character exists:
"}}}}}" produces one parse error for each '}'.  I believe every parse
error should consume at least one input character, but we'd have to
double-check before we rely on it.

>>
>>>>
>>>>>  bool monitor_cur_is_qmp(void);
>>>>> diff --git a/monitor.c b/monitor.c
>>>>> index 4807bbe811..a08e020b61 100644
>>>>> --- a/monitor.c
>>>>> +++ b/monitor.c
>>>>> @@ -4097,7 +4097,7 @@ static int monitor_can_read(void *opaque)
>>>>>  {
>>>>>      Monitor *mon = opaque;
>>>>> -    return !atomic_mb_read(&mon->suspend_cnt);
>>>>> +    return !atomic_mb_read(&mon->suspend_cnt) ? 4096 : 0;
>>>>>  }
>>>>>
>>>>>  /*
>>>>
>>>> The ramifications are not obvious to me.  I think I need to
>>>> (re-)examine how QMP reads input, with special consideration to its
>>>> OOB feature.
>>>
>>> Yeh that was the bit that worried me; I also wondered what happens
>>> with monitor_suspend and things like fd passing; enough to make it
>>> non-obvious to me.
>>>
>>
>> OK, I don't have answers..
>>
>> Markus, David, could you please help? Or, what to do? We of course
>> don't have enough expertise to prove that the patch will not break any
>> feature in the whole monitor subsystem.
>>
>> I can say that it just works, and we have lived with it for years (and
>> our customers too), and tests pass. Still, I don't think that we use
>> the OOB feature. I remember some problems with it, when the RHEL
>> version which we were based on included some OOB feature patches, but
>> not some appropriate fixes.. But it was long ago.
>>
>> If nobody can say that the patch is wrong, maybe we can just apply it?
>> We'll have the whole release cycle to catch any bugs and revert the
>> commit at any time. In this way we only have a question about
>> QMP_REQ_QUEUE_LEN_MAX, which is quite simple.
>>
>
>
> ping here, as a replacement for "[PATCH v3 0/5] Increase amount of
> data for monitor to read"
>
> If no better idea, I'll make what I propose above (with two limits) at
> some good moment :)

Digest what I wrote above, then tell me how you'd like to proceed.