Hi,
On 27/03/2020 02:34, Stefano Stabellini wrote:
> It doesn't take into account the latest state of interrupts on
> other vCPUs.
So I think your implementation is going to introduce a deadlock in the
guest. Let's imagine a guest with 2 vCPUs (A and B) with the following
setup:
* The HW SPI 32 is routed to and serviced by vCPU B.
* vCPU A will routinely wait for any pending SPI 32 to be finished
before performing a specific task.
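To make the scenario concrete, vCPU A's wait amounts to polling the SPI's bit in GICD_ISACTIVERn until it clears. A minimal sketch of that check (the helper name is mine; the one-bit-per-interrupt, 32-interrupts-per-register layout is the GIC architecture's):

```c
#include <stdint.h>

/* GICD_ISACTIVERn layout: one bit per interrupt, 32 interrupts per register. */
static int spi_is_active(const volatile uint32_t *isactiver, unsigned int irq)
{
    return (isactiver[irq / 32] >> (irq % 32)) & 1;
}

/*
 * vCPU A's wait, analogous to __synchronize_hardirq():
 *
 *     while ( spi_is_active(gicd_isactiver, 32) )
 *         cpu_relax();
 *
 * If the active bit reported by the vGIC is stale because vCPU B never
 * exited after deactivating SPI 32, this loop spins forever.
 */
```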
In the current form of the vGIC, vCPU B will not exit to Xen when SPI 32
has been deactivated. Instead, the vCPU will continue to run until an
unrelated trap happens (I/O emulation, IRQs...). Depending on your setup
(think NULL scheduler), this may not happen for a very long time (or
ever). Until vCPU B exits to Xen, SPI 32 may still be reported as
active. Therefore vCPU A will keep waiting and stay blocked until vCPU B
finally traps into Xen.
My example above is basically a cut-down version of
__synchronize_hardirq() in Linux. In practice, you may get lucky most of
the time because traps will happen from time to time. However, it also
means the task vCPU A needs to perform will be delayed.
So I would not bet on the trap here. You have two options:
1) Force a vCPU to trap when deactivating an interrupt
2) Force the vCPUs to exit when reading I{S,C}ACTIVER
1) will incur a cost on every interrupt, which is not great. So I think
your best option is 2) here.
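For 2), the idea would be that a trapped I{S,C}ACTIVER read kicks every vCPU that may still hold the interrupt in a list register, so its state gets written back before the active bit is computed. A rough sketch of that logic (the struct, field names, and isactiver_read() are illustrative, not Xen's actual vGIC interfaces):

```c
#include <stdbool.h>

#define NR_VCPUS 2

/* Hypothetical per-vCPU view of one interrupt. */
struct vcpu_irq_state {
    bool irq_in_lr;       /* IRQ resident in one of this vCPU's LRs? */
    bool lr_active;       /* ...and still marked active there? */
    bool pending_sync;    /* set when we kick the vCPU to resync its LRs */
};

/*
 * On a trapped ISACTIVER read: for every vCPU that may hold the IRQ in
 * a list register, request an exit so the LR state is written back, and
 * fold the latest known active state into the reported bit.
 */
static bool isactiver_read(struct vcpu_irq_state *v, unsigned int nr,
                           bool sw_active)
{
    for ( unsigned int i = 0; i < nr; i++ )
    {
        if ( v[i].irq_in_lr )
        {
            v[i].pending_sync = true;     /* force an exit + LR writeback */
            sw_active |= v[i].lr_active;  /* latest hardware view */
        }
    }
    return sw_active;
}
```

This keeps the cost on the (rare) ISACTIVER read path rather than on every interrupt deactivation, which is why 2) looks preferable.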
Cheers,
--
Julien Grall