Hi Rahul,

On 02/12/2022 11:59, Michal Orzel wrote:
> Hi Rahul,
> 
> On 01/12/2022 17:02, Rahul Singh wrote:
>>
>>
>> The SMMUv3 supports two stages of translation. Each stage of translation
>> can be independently enabled. An incoming address is logically translated
>> from VA to IPA in stage 1, then the IPA is input to stage 2, which
>> translates the IPA to the output PA.
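>>
>> A minimal sketch of how the two stages compose logically (the types and
>> function names below are illustrative only, not actual driver code):
>>
>>     #include <stdint.h>
>>
>>     typedef uint64_t va_t;   /* guest virtual address         */
>>     typedef uint64_t ipa_t;  /* intermediate physical address */
>>     typedef uint64_t pa_t;   /* output physical address       */
>>
>>     /* Stage 1 walk: guest-owned page tables, VA -> IPA */
>>     ipa_t stage1_translate(va_t va);
>>
>>     /* Stage 2 walk: Xen-owned page tables, IPA -> PA */
>>     pa_t stage2_translate(ipa_t ipa);
>>
>>     /* Nested configuration: both stages enabled, applied in sequence */
>>     static pa_t nested_translate(va_t va)
>>     {
>>         return stage2_translate(stage1_translate(va));
>>     }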
>>
>> Stage 1 is intended to be used by a software entity to provide isolation
>> or translation to buffers within the entity, for example DMA isolation
>> within an OS. Stage 2 is intended to be available in systems supporting
>> the Virtualization Extensions and is intended to virtualize device DMA to
>> guest VM address spaces. When both stage 1 and stage 2 are enabled, the
>> translation configuration is called nested.
>>
>> Stage 1 translation support is required to provide isolation between
>> different devices within an OS. XEN already supports stage 2 translation,
>> but there is no support for stage 1 translation. The goal of this work is
>> to support stage 1 translation for XEN guests. Stage 1 has to be
>> configured within the guest to provide isolation.
>>
>> We cannot trust the guest OS to control the SMMUv3 hardware directly, as
>> a compromised guest OS could corrupt the SMMUv3 configuration and make
>> the system vulnerable. The guest owns the stage 1 page tables and the
>> stage 1 configuration structures. XEN handles the root configuration
>> structure (for security reasons), including the stage 2 configuration.
>>
>> XEN will emulate the SMMUv3 hardware and expose a virtual SMMUv3 to the
>> guest. The guest can use its native SMMUv3 driver to configure the stage 1
>> translation. When the guest configures the SMMUv3 for stage 1, XEN will
>> trap the access and configure the hardware:
>>
>> SMMUv3 driver (guest OS) -> configures the stage-1 translation ->
>> XEN traps the access -> XEN SMMUv3 driver configures the HW.
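>>
>> A rough sketch of what happens on a trapped STE update (all names below
>> are made up for illustration; they are not the identifiers used in the
>> patches):
>>
>>     #include <stdint.h>
>>     #include <stdbool.h>
>>
>>     struct domain;                          /* Xen domain                */
>>
>>     struct guest_ste {
>>         uint64_t s1_ctx_ptr;   /* guest IPA of its stage 1 context desc. */
>>         bool     s1_enabled;
>>     };
>>
>>     void hw_smmu_set_ste_stage2_only(struct domain *d, uint32_t sid);
>>     void hw_smmu_set_ste_nested(struct domain *d, uint32_t sid,
>>                                 uint64_t s1_ctx_ptr);
>>
>>     /* Called by the vSMMUv3 emulation when a trapped access updates the
>>      * stream table entry for 'sid'. */
>>     static void vsmmu_handle_ste_write(struct domain *d, uint32_t sid,
>>                                        const struct guest_ste *gste)
>>     {
>>         if (!gste->s1_enabled) {
>>             /* Stage 2 only: translation fully owned by Xen. */
>>             hw_smmu_set_ste_stage2_only(d, sid);
>>             return;
>>         }
>>
>>         /* Nested: point the hardware STE's stage 1 at the guest-owned
>>          * tables while the stage 2 fields stay under Xen's control. */
>>         hw_smmu_set_ste_nested(d, sid, gste->s1_ctx_ptr);
>>     }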
>>
>> The SMMUv3 driver has to be updated to support stage-1 translation, based
>> on the work done by the KVM team to support nested stage translation:
>> https://github.com/eauger/linux/commits/v5.11-stallv12-2stage-v14
>> https://lwn.net/Articles/852299/
>>
>> As the stage 1 translation is configured by XEN on behalf of the guest,
>> translation faults encountered during the translation process need to be
>> propagated up and re-injected into the guest. When the guest invalidates
>> stage 1 related caches, the invalidations must be forwarded to the SMMUv3
>> hardware.
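>>
>> A simplified sketch of the invalidation forwarding (again, illustrative
>> names only; the idea is that stage 1 invalidation commands issued by the
>> guest are replayed on the physical SMMUv3):
>>
>>     #include <stdint.h>
>>
>>     #define CMDQ_OP_TLBI_NH_VA  0x12U   /* stage 1 VA invalidation */
>>
>>     struct vsmmu_cmd {
>>         uint8_t  opcode;
>>         uint16_t asid;
>>         uint64_t iova;
>>     };
>>
>>     void hw_smmu_tlbi_s1_va(uint16_t vmid, uint16_t asid, uint64_t iova);
>>
>>     /* If the command taken from the guest's queue is a stage 1 TLB
>>      * invalidation, re-issue it on the hardware command queue tagged
>>      * with the domain's VMID. */
>>     static void vsmmu_forward_cmd(uint16_t vmid,
>>                                   const struct vsmmu_cmd *cmd)
>>     {
>>         if (cmd->opcode == CMDQ_OP_TLBI_NH_VA)
>>             hw_smmu_tlbi_s1_va(vmid, cmd->asid, cmd->iova);
>>     }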
>>
>> This patch series is sent as an RFC to get initial feedback from the
>> community. It consists of 21 patches, which is a lot for a reviewer to go
>> through, but we thought sending the feature end-to-end as one big series
>> would make it easier to understand. Once we get initial feedback, we will
>> split the series into smaller batches of patches for review.
> 
> Due to the very limited availability of the board we have that is equipped
> with DMA platform devices and SMMUv3 (I know that you tested the PCI use
> case thoroughly), I managed for now to do the testing on dom0 only.
> 
> By commenting out the code in Linux responsible for setting up Xen SWIOTLB
> DMA ops, I was able to successfully verify the nested SMMU working properly
> for DMA platform devices, using ZDMA as an example. Both the upstream
> dmatest client app and the VFIO user space driver that I wrote for ZDMA
> passed the test!
> 
> I added some logs to verify the sync-up between Linux and Xen during a
> VFIO test:
> 
> LINUX: SMMUv3: Setting the STE S1 Config 0x1405c000 for SID=0x210
> XEN: vSMMUv3: guest config=ARM_SMMU_DOMAIN_NESTED
> XEN: SMMUv3: Setting the STE S1 Config 0x1405c000 for SID=0x210
> 
> Before transfer example:
>  src value: 0xdb71faf
>  dst value: 0
> Waiting for transfer completion...
> After transfer example:
>  src value: 0xdb71faf
>  dst value: 0xdb71faf
> TEST RESULT: PASS
> 
> LINUX: SMMUv3: Setting the STE S1 Config 0x12502000 for SID=0x210
> XEN: vSMMUv3: guest config=ARM_SMMU_DOMAIN_NESTED
> XEN: SMMUv3: Setting the STE S1 Config 0x12502000 for SID=0x210

I finished testing this series by also covering dom0less and xl domUs.
Tests passed, so good job!
I do not have access to any board with more than one IOMMU, so I could not
validate this behavior.

~Michal
