On 8/1/25 10:42, Igor Mammedov wrote:
Looking at the code, it seems we always hold the BQL when setting
exit_request.
While the BQL is indeed taken, it is taken for other reasons (e.g. because of
cpu->halt_cond).
In this case it's set and read from within the same thread, so it's okay.
It matches a similar pattern:
/* Read cpu->exit_request before KVM_RUN reads run->immediate_exit.
* Matching barrier in kvm_eat_signals.
*/
smp_rmb();
run_ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0);
To be on the safe side, this preserves the barrier that the BQL provided before.
I can drop it if it's not really needed.
That comment is wrong... The actual pairing here is with cpu_exit(),
though the logic of cpu_exit() is messed up and only fully works for
TCG, and immediate_exit does not matter at all. I'll clean it up and
write a comment.
A correct ordering would be:
(a) store other flags that will be checked if cpu->exit_request is 1
(b) cpu_exit(): store-release cpu->exit_request
(c) cpu_interrupt(): store-release cpu->interrupt_request
- broadcast cpu->halt_cond if needed; right now it's always done in
qemu_cpu_kick()
>>> now you can release the BQL
(d) do the accelerator-specific kick (e.g. write icount_decr for TCG,
pthread_kill for KVM, etc.)
The other side then does the checks in the opposite direction:
(d) the accelerator's execution loop exits thanks to the kick
(c) then check cpu->interrupt_request - any work that's needed here may
take the BQL or not, and may set cpu->exit_request
(b) then check cpu->exit_request to see if it should do slow path work
(a) then (under the BQL) it possibly goes to sleep, waiting on
cpu->halt_cond (see the sketch right below).
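To make that concrete, here is a minimal, self-contained C11 sketch of the
two sides. ToyCPU, request_exit() and vcpu_thread() are made-up stand-ins,
with a pthread mutex/condvar playing the role of the BQL and cpu->halt_cond;
it is not QEMU code, only the release/acquire shape of the two lists above:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-ins for CPUState, the BQL and cpu->halt_cond. */
    typedef struct {
        atomic_bool exit_request;
        atomic_int  interrupt_request;
        int         pending_work;     /* the "other flags" of step (a) */
    } ToyCPU;

    static ToyCPU cpu;
    static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  halt_cond = PTHREAD_COND_INITIALIZER;

    /* Requesting side: steps (a)-(c) under the BQL, then (d) the kick. */
    static void request_exit(int mask)
    {
        pthread_mutex_lock(&bql);
        cpu.pending_work = 42;                                    /* (a) */
        atomic_store_explicit(&cpu.exit_request, true,
                              memory_order_release);              /* (b) */
        atomic_fetch_or_explicit(&cpu.interrupt_request, mask,
                                 memory_order_release);           /* (c) */
        pthread_cond_broadcast(&halt_cond);
        pthread_mutex_unlock(&bql);
        /* (d) the accelerator-specific kick would go here. */
    }

    /* vCPU side: the same checks in the opposite direction. */
    static void *vcpu_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            /* (d) the execution loop has exited thanks to the kick. */
            if (atomic_load_explicit(&cpu.interrupt_request,
                                     memory_order_acquire)) {     /* (c) */
                /* handle interrupts; may take the BQL, may set exit_request */
            }
            if (atomic_load_explicit(&cpu.exit_request,
                                     memory_order_acquire)) {     /* (b) */
                /* (a) pending_work is guaranteed to be visible here */
                printf("exit requested, pending_work=%d\n", cpu.pending_work);
                return NULL;
            }
            /* (a) possibly go to sleep; recheck under the lock so the
             * broadcast from request_exit() cannot be lost.
             */
            pthread_mutex_lock(&bql);
            if (!atomic_load_explicit(&cpu.exit_request,
                                      memory_order_acquire)) {
                pthread_cond_wait(&halt_cond, &bql);
            }
            pthread_mutex_unlock(&bql);
        }
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, vcpu_thread, NULL);
        request_exit(1);
        pthread_join(t, NULL);
        return 0;
    }

The point is that anything consulted after seeing exit_request == 1 (step (a))
is written before the store-release, so the acquire load on the vCPU side
makes it visible without leaning on the BQL.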
cpu->exit_request and cpu->interrupt_request are not a
load-acquire/store-release pair right now, but they should be. Probably
everything is protected one way or the other by the BQL, but it's not clear.
I'll handle cpu->exit_request and leave cpu->interrupt_request to you.
For the sake of this series, please do the following:
- contrary to what I said in my earlier review, do introduce a
cpu_test_interrupt() now and make it use a load-acquire. There aren't
many occurrences (a sketch of the helper follows after this list):
accel/tcg/cpu-exec.c: if (unlikely(qatomic_read(&cpu->interrupt_request))) {
target/alpha/cpu.c: return cs->interrupt_request & (CPU_INTERRUPT_HARD
target/arm/cpu.c: && cs->interrupt_request &
target/arm/hvf/hvf.c: if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
target/avr/cpu.c: return (cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_RESET))
target/hppa/cpu.c: return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
target/i386/kvm/kvm.c: if (cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
target/i386/kvm/kvm.c: if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
target/i386/nvmm/nvmm-all.c: if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
target/i386/whpx/whpx-all.c: cpu->interrupt_request & (CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {
target/i386/whpx/whpx-all.c: if (cpu->interrupt_request & (CPU_INTERRUPT_INIT | CPU_INTERRUPT_TPR)) {
target/microblaze/cpu.c: return cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI);
target/openrisc/cpu.c: return cs->interrupt_request & (CPU_INTERRUPT_HARD |
target/rx/cpu.c: return cs->interrupt_request &
- in tcg_handle_interrupt and generic_handle_interrupt, change it like this:
/* Pairs with load_acquire in cpu_test_interrupt(). */
qatomic_store_release(&cpu->interrupt_request,
cpu->interrupt_request | mask);
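Put together, a sketch of what the helper could look like; the name comes
from the request above, but its placement and exact signature are only an
assumption, the load-acquire/store-release pairing is the point:

    /* Sketch only: assumes qatomic_load_acquire() from qemu/atomic.h and
     * the existing CPUState::interrupt_request field; where the helper
     * lives is up to the series.
     */
    static inline bool cpu_test_interrupt(CPUState *cpu, int mask)
    {
        /* Pairs with the store-release in tcg_handle_interrupt() and
         * generic_handle_interrupt().
         */
        return qatomic_load_acquire(&cpu->interrupt_request) & mask;
    }

A caller from the list above would then read, for example:

    if (cpu_test_interrupt(cpu, CPU_INTERRUPT_NMI | CPU_INTERRUPT_SMI)) {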
I'll take care of properly adding the store-release/load-acquire
for exit_request and removing the unnecessary memory barriers in kvm-all.c.
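Purely as an illustration of that direction (not the actual kvm-all.c
change): once exit_request is published with a store-release, the plain read
plus explicit smp_rmb() quoted above can collapse into a single acquire
load, roughly:

    /* Hypothetical: with a store-release in cpu_exit(), the acquire load
     * subsumes the smp_rmb() that currently sits before KVM_RUN.
     */
    if (qatomic_load_acquire(&cpu->exit_request)) {
        /* ... request an immediate exit from KVM_RUN ... */
    }
    run_ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0);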
Paolo