From: Philippe Mathieu-Daudé <[email protected]>

Similarly to commit 1d78a3c3ab8 for KVM, wrap hv_vcpu_run() with
cpu_exec_start/end(), so that the accelerator can perform pending
operations while all vCPUs are quiescent. See also the explanation in
commit c265e976f46 ("cpus-common: lock-free fast path for
cpu_exec_start/end").
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Signed-off-by: Peter Maydell <[email protected]>
---
 target/arm/hvf/hvf.c  | 2 ++
 target/i386/hvf/hvf.c | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index 79861dcacf9..c882f4c89cf 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -2026,7 +2026,9 @@ int hvf_arch_vcpu_exec(CPUState *cpu)
     }
     bql_unlock();
+    cpu_exec_start(cpu);
     r = hv_vcpu_run(cpu->accel->fd);
+    cpu_exec_end(cpu);
     bql_lock();
     switch (r) {
     case HV_SUCCESS:
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index 28d98659ec2..16febbac48f 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -992,9 +992,13 @@ int hvf_arch_vcpu_exec(CPUState *cpu)
             return EXCP_HLT;
         }
 
+        cpu_exec_start(cpu);
+
         hv_return_t r = hv_vcpu_run_until(cpu->accel->fd, HV_DEADLINE_FOREVER);
         assert_hvf_ok(r);
 
+        cpu_exec_end(cpu);
+
         ret = hvf_handle_vmexit(cpu);
     } while (ret == 0);
-- 
2.43.0
