On 09/23/2016 12:31 AM, Paolo Bonzini wrote:
This will serve as the base for async_safe_run_on_cpu. Because
start_exclusive uses CPU_FOREACH, merge exclusive_lock with
qemu_cpu_list_lock: together with a call to exclusive_idle (via
cpu_exec_start/end) in cpu_list_add, this protects exclusive work
against concurrent CPU addition and removal.
Reviewed-by: Alex Bennée
Signed-off-by: Paolo Bonzini
---
bsd-user/main.c | 17 ---
cpus-common.c   | 82 +++
cpus.c          |  2