On 1 September 2017 at 18:21, Eric Auger <eric.au...@redhat.com> wrote:
> In VFIO use cases, the virtual SMMU translates IOVA -> IPA (stage 1)
> whereas the physical SMMU translates IPA -> host PA (stage 2).
>
> The 2 stages of the physical SMMU are currently not used.
> Instead both stage 1 and stage 2 mappings are combined together
> and programmed into a single stage (S1) of the physical SMMU.
>
> The drawback of this approach is that each time the IOVA -> IPA mapping
> is changed by the guest, the host must be notified to re-program
> the physical SMMU with the combined stages.
>
> So we need to trap into the QEMU device each time the guest alters
> the configuration or TLB data. Unfortunately the SMMU does not
> expose a caching mode like the Intel IOMMU does. On Intel, this
> caching-mode HW bit informs the OS that each time it updates the
> remapping structures (even on map) it must invalidate the caches.
> Those invalidate commands are used to notify the host that it must
> recompute the S1+S2 mappings and reprogram the HW.
>
> As we don't have that HW bit on ARM, we currently rely on a
> FW quirk on the guest smmuv3 driver side. When this FW quirk is
> applied, the driver performs TLB invalidations on map and
> sends SMMU_CMD_TLBI_NH_VA_AM commands.
>
> Those TLB invalidations are used to trap changes in the
> translation tables.
>
> We introduced a new implementation-defined SMMU_CMD_TLBI_NH_VA_AM
> command since it allows invalidating a whole range instead
> of invalidating a single page (as the native SMMU_CMD_TLBI_NH_VA
> command does).
>
> As a consequence, anybody wanting to use the virtual smmuv3 in the
> VFIO use case must add
>   -device smmuv3,caching-mode
> to the option line.
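For readers unfamiliar with the scheme in the quoted message, the sketch
below models the trap-and-recompute flow in plain C: the guest driver
(with the FW quirk) emits an SMMU_CMD_TLBI_NH_VA_AM on map, the emulated
device traps it, recombines stage 1 (IOVA -> IPA) with stage 2
(IPA -> host PA), and pushes the result to the physical SMMU. Every name
and value in it (VSmmuCmd, s1_translate, s2_translate, physical_smmu_map,
the opcode number) is made up for illustration; only the command name
comes from the series, and the real QEMU/VFIO code paths look different.

/*
 * Hypothetical, self-contained model of the flow described above.
 * It is not the QEMU smmuv3 emulation or the VFIO API.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SMMU_CMD_TLBI_NH_VA_AM  0x8F  /* opcode value is invented */

typedef struct {
    uint8_t  opcode;
    uint64_t iova;   /* start of the invalidated range */
    uint64_t size;   /* length of the range ("AM" = address/mask form) */
} VSmmuCmd;

/* Stage-1 walk of the guest page tables: IOVA -> IPA (placeholder). */
static bool s1_translate(uint64_t iova, uint64_t *ipa)
{
    *ipa = iova;  /* identity map, just to keep the sketch runnable */
    return true;
}

/* Stage-2 lookup maintained by the VMM: IPA -> host PA (placeholder). */
static bool s2_translate(uint64_t ipa, uint64_t *hpa)
{
    *hpa = ipa + 0x80000000ULL;  /* arbitrary offset, illustration only */
    return true;
}

/* Stand-in for a VFIO map request that reprograms the physical SMMU. */
static void physical_smmu_map(uint64_t iova, uint64_t hpa, uint64_t size)
{
    printf("reprogram phys SMMU: IOVA 0x%llx -> PA 0x%llx (0x%llx bytes)\n",
           (unsigned long long)iova, (unsigned long long)hpa,
           (unsigned long long)size);
}

/* Command-queue consumer of the emulated smmuv3 (greatly simplified). */
static void vsmmu_handle_cmd(const VSmmuCmd *cmd)
{
    if (cmd->opcode != SMMU_CMD_TLBI_NH_VA_AM) {
        return;  /* other commands elided */
    }
    /*
     * The range invalidation tells us the guest changed IOVA -> IPA
     * for this range, so recombine both stages and push the result.
     */
    for (uint64_t off = 0; off < cmd->size; off += 4096) {
        uint64_t ipa, hpa;
        if (s1_translate(cmd->iova + off, &ipa) && s2_translate(ipa, &hpa)) {
            physical_smmu_map(cmd->iova + off, hpa, 4096);
        }
    }
}

int main(void)
{
    VSmmuCmd cmd = { SMMU_CMD_TLBI_NH_VA_AM, 0x100000, 2 * 4096 };
    vsmmu_handle_cmd(&cmd);
    return 0;
}

The point the sketch tries to make clear is that the invalidation
command is being repurposed as a "mapping changed" notification, which
is exactly the behaviour the reply below objects to.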
Even more of a NACK on this one. We shouldn't need to do weird things
to be able to use the SMMU in a VM. We need to figure out how the spec
expects us (and the kernel) to be using the SMMU, and do that.

thanks
-- PMM