On 31/03/20 17:23, Peter Xu wrote:
>> Or KVM_MEM_READONLY.
>
> Yeah, I used a new flag because I thought READONLY was a bit tricky
> to use directly here.  The thing is, IIUC, if the guest writes to a
> READONLY slot then KVM will either ignore the write or trigger an
> error (I didn't check which).  However, what we want here is for the
> write to fall back to userspace, so that it is neither dropped (we
> still want the written data to land gracefully in RAM) nor an error
> (because the slot is actually writable).

No, writes fall back to userspace with KVM_MEM_READONLY.
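From memory, the userspace side of that looks more or less like the
following (an untested sketch with error handling omitted; vm_fd and
vcpu_fd are the usual KVM fds, hva is the host buffer backing
[gpa, gpa + size), and run is the vcpu's mmap'ed struct kvm_run):

    #include <linux/kvm.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static void set_readonly_slot(int vm_fd, __u32 slot, __u64 gpa,
                                  __u64 size, void *hva)
    {
        struct kvm_userspace_memory_region region = {
            .slot            = slot,
            .flags           = KVM_MEM_READONLY,
            .guest_phys_addr = gpa,
            .memory_size     = size,
            .userspace_addr  = (__u64)(unsigned long)hva,
        };

        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }

    static void run_once(int vcpu_fd, struct kvm_run *run,
                         __u64 gpa, void *hva)
    {
        ioctl(vcpu_fd, KVM_RUN, 0);

        /* A guest write to the read-only slot is neither dropped nor
         * an error: KVM_RUN returns an MMIO write exit and userspace
         * decides where the data lands (here: straight into RAM). */
        if (run->exit_reason == KVM_EXIT_MMIO && run->mmio.is_write) {
            memcpy((char *)hva + (run->mmio.phys_addr - gpa),
                   run->mmio.data, run->mmio.len);
        }
    }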
>> The problem here is also that the GFN is not a unique identifier of
>> the QEMU ram_addr_t.  However, you don't really need to kick all
>> vCPUs out, do you?  You can protect the dirty ring with its own
>> per-vCPU mutex and harvest the pages from the main thread.
>
> I'm not sure I get the point, but just to mention that currently the
> dirty GFNs are collected in a standalone thread (in the QEMU series
> it's called the reaper thread) rather than in the per-vCPU threads,
> because KVM_RESET_DIRTY_RINGS is per-VM after all.  One major reason
> to kick the vCPUs is to make sure the hardware-cached dirty GFNs
> (i.e. PML) are flushed synchronously.

But you're referring to KVM kicking vCPUs, not qemu_vcpu_kick.  Can
you just do an iteration of reaping after setting KVM_MEM_READONLY?

> I think the whole kick operation is indeed too heavy for this when
> done with the run_on_cpu() trick, because the thing we want (PML
> flushing) is actually per-vCPU and needs no BQL interaction.  Do we
> have/need a lightweight way to kick one vCPU in a synchronous way?
> I was wondering about something like responding to a "sync kick"
> request in the vCPU thread right after KVM_RUN returns (when we
> don't hold the BQL yet).  Would that make sense?

Not synchronously, because anything synchronous is very susceptible
to deadlocks.
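For reference, by "an iteration of reaping" above I mean one pass over
all the vCPU rings followed by the single per-VM reset; roughly the
following (an untested sketch against the kvm_dirty_gfn layout in the
series, with the acquire/release ordering on the flags field left out
and the per-vCPU bookkeeping names made up for the example):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    struct vcpu_ring {
        struct kvm_dirty_gfn *ring;  /* vcpu fd mmap'ed at the dirty
                                        ring page offset */
        __u32 fetch;                 /* next entry to look at */
        __u32 size;                  /* entries, power of two */
    };

    static void reap_vcpu(struct vcpu_ring *r)
    {
        struct kvm_dirty_gfn *e = &r->ring[r->fetch % r->size];

        /* Collect published entries and flag them for the reset. */
        while (e->flags & KVM_DIRTY_GFN_F_DIRTY) {
            /* ... mark (e->slot, e->offset) in the dirty bitmap ... */
            e->flags |= KVM_DIRTY_GFN_F_RESET;
            e = &r->ring[++r->fetch % r->size];
        }
    }

    static void reap_iteration(int vm_fd, struct vcpu_ring *rings,
                               int nr_vcpus)
    {
        int i;

        for (i = 0; i < nr_vcpus; i++)
            reap_vcpu(&rings[i]);

        /* Per-VM: recycles all the entries flagged above in one go. */
        ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
    }

Thanks,

Paolo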