On Wed, 27 Jan 2021 12:13:34 +0000,
Shenming Lu <lushenm...@huawei.com> wrote:
> 
> With GICv4.1, unmapping the vPE guarantees the invalidation of any
> VPT caches associated with it, so we can get the VLPI state by
> peeking at the VPT. Add a function for this.
> 
> Signed-off-by: Shenming Lu <lushenm...@huawei.com>
> ---
>  arch/arm64/kvm/vgic/vgic-v4.c | 19 +++++++++++++++++++
>  arch/arm64/kvm/vgic/vgic.h    |  1 +
>  2 files changed, 20 insertions(+)
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
> index 66508b03094f..ac029ba3d337 100644
> --- a/arch/arm64/kvm/vgic/vgic-v4.c
> +++ b/arch/arm64/kvm/vgic/vgic-v4.c
> @@ -203,6 +203,25 @@ void vgic_v4_configure_vsgis(struct kvm *kvm)
>       kvm_arm_resume_guest(kvm);
>  }
>  
> +/*
> + * Must be called with GICv4.1 and the vPE unmapped: unmapping
> + * guarantees the invalidation of any VPT caches associated with
> + * the vPE, so the VLPI state can be read by peeking at the VPT.
> + */
> +void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val)
> +{
> +     struct its_vpe *vpe = &irq->target_vcpu->arch.vgic_cpu.vgic_v3.its_vpe;
> +     int mask = BIT(irq->intid % BITS_PER_BYTE);
> +     void *va;
> +     u8 *ptr;
> +
> +     va = page_address(vpe->vpt_page);
> +     ptr = va + irq->intid / BITS_PER_BYTE;
> +
> +     *val = !!(*ptr & mask);
> +}

What guarantees that you can actually read anything valid? Yes, the
VPT caches are clean. But that doesn't make them coherent with CPU
caches.

You need some level of cache maintenance here.

Thanks,

        M.

-- 
Without deviation from the norm, progress is not possible.
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
