Re: [PATCH v13 07/15] iommu/smmuv3: Allow stage 1 invalidation with unmanaged ASIDs

2021-02-15 Thread Auger Eric
Hi Shameer,

On 12/3/20 7:42 PM, Shameerali Kolothum Thodi wrote:
> Hi Eric,
> 
>> -----Original Message-----
>> From: kvmarm-boun...@lists.cs.columbia.edu
>> [mailto:kvmarm-boun...@lists.cs.columbia.edu] On Behalf Of Auger Eric
>> Sent: 01 December 2020 13:59
>> To: wangxingang 
>> Cc: Xieyingtai ; jean-phili...@linaro.org;
>> k...@vger.kernel.org; m...@kernel.org; j...@8bytes.org; w...@kernel.org;
>> iommu@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
>> vivek.gau...@arm.com; alex.william...@redhat.com;
>> zhangfei@linaro.org; robin.mur...@arm.com;
>> kvm...@lists.cs.columbia.edu; eric.auger@gmail.com
>> Subject: Re: [PATCH v13 07/15] iommu/smmuv3: Allow stage 1 invalidation with
>> unmanaged ASIDs
>>
>> Hi Xingang,
>>
>> On 12/1/20 2:33 PM, Xingang Wang wrote:
>>> Hi Eric
>>>
>>> On Wed, 18 Nov 2020 12:21:43, Eric Auger wrote:
>>>> @@ -1710,7 +1710,11 @@ static void arm_smmu_tlb_inv_context(void *cookie)
>>>>  	 * insertion to guarantee those are observed before the TLBI. Do be
>>>>  	 * careful, 007.
>>>>  	 */
>>>> -	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>>>> +	if (ext_asid >= 0) { /* guest stage 1 invalidation */
>>>> +		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
>>>> +		cmd.tlbi.asid	= ext_asid;
>>>> +		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
>>>> +	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
>>>
>>> Found a problem here, the cmd for guest stage 1 invalidation is built,
>>> but it is not delivered to smmu.
>>>
>>
>> Thank you for the report. I will fix that soon. With that fixed, have
>> you been able to run vSVA on top of the series? Do you need other stuff
>> to be fixed at the SMMU level?
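A minimal sketch of the missing step, assuming the driver's existing v5.10
command-queue helpers arm_smmu_cmdq_issue_cmd()/arm_smmu_cmdq_issue_sync();
the new guest stage 1 branch has to actually submit the command it builds
(the exact fix in the respin may of course differ):

	if (ext_asid >= 0) { /* guest stage 1 invalidation */
		cmd.opcode	= CMDQ_OP_TLBI_NH_ASID;
		cmd.tlbi.asid	= ext_asid;
		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
		/* Submit and sync the TLBI so it actually reaches the SMMU */
		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
		arm_smmu_cmdq_issue_sync(smmu);
	} else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
		/* existing host-managed stage 1 path, unchanged */
	}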
> 
> I am seeing another issue with this series. This is when you have the vSMMU
> in non-strict mode (iommu.strict=0). Any network pass-through dev with an
> iperf run will be enough to reproduce the issue. It may randomly stop/hang.
> 
> It looks like the .flush_iotlb_all from the guest is not propagated down to
> the host correctly. I have a temp hack to fix this in QEMU wherein
> CMDQ_OP_TLBI_NH_ASID will result in a CACHE_INVALIDATE with the
> IOMMU_INV_GRANU_PASID flag and archid set.
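Presumably the temp hack fills in something like the invalidation descriptor
below; the structure, flag names and argsz field are from the v5.10
include/uapi/linux/iommu.h, while how the 'asid' value is extracted from the
trapped command and pushed down the series' CACHE_INVALIDATE path is an
assumption made only for illustration:

	/* Sketch: translate a trapped guest CMDQ_OP_TLBI_NH_ASID into a
	 * PASID-granularity (archid-based) IOTLB invalidation for the host. */
	struct iommu_cache_invalidate_info info = {
		.argsz	     = sizeof(info),
		.version     = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
		.cache	     = IOMMU_CACHE_INV_TYPE_IOTLB,
		.granularity = IOMMU_INV_GRANU_PASID,
	};

	info.granu.pasid_info.flags  = IOMMU_INV_PASID_FLAGS_ARCHID;
	info.granu.pasid_info.archid = asid;	/* guest ASID from the command */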

Thank you for the analysis. Indeed the NH_ASID was not properly handled
as the asid info was not passed down. I fixed the domain invalidation and
added asid-based invalidation.

Thanks

Eric
> 
> Please take a look and let me know. 
> 
>> As I am going to respin soon, please let me know what is the best branch
>> to rebase on to ease your integration.
> 
> Please find the latest kernel and QEMU branches with vSVA support added here:
> 
> https://github.com/hisilicon/kernel-dev/tree/5.10-rc4-2stage-v13-vsva
> https://github.com/hisilicon/qemu/tree/v5.2.0-rc1-2stage-rfcv7-vsva
> 
> I have done some basic vSVA tests on a HiSilicon D06 board with a zip
> device that supports STALL. All looks good so far apart from the issues
> that have already been reported/discussed.
> 
> The kernel branch is actually a rebase of the SVA/UACCE-related patches
> from a Linaro branch here:
> 
> https://github.com/Linaro/linux-kernel-uadk/tree/uacce-devel-5.10
> 
> I think going forward it would be good (if possible) to respin your series
> on top of an SVA branch with STALL/PRI support added.
> 
> Hi Jean/zhangfei,
> Is it possible to have a branch with the minimum required SVA/UACCE-related
> patches that are already public and can be a "stable" candidate for future
> respins of Eric's series?
> Please share your thoughts.
> 
> Thanks,
> Shameer 
> 
>> Best Regards
>>
>> Eric
>>
> 



Re: [PATCH 5/6] driver core: lift dma_default_coherent into common code

2021-02-15 Thread Maciej W. Rozycki
On Tue, 9 Feb 2021, Maciej W. Rozycki wrote:

> > >  Do you need to have this verified anyhow?  I only have a non-coherent
> > > 5Kc Malta though.
> > 
> > If you get a chance to test this logic, that would be great.
> 
>  I'll try to give it a hit in the next few days then.  Installed in my 
> Malta I have a DEFPA, which is about as serious a DMA user as a piece of 
> classic PCI hardware could be.  I need to debug the issue of another DEFPA 
> not working with my POWER9 system, possibly due to an IOMMU handling bug 
> (hopefully not broken host hardware), so I'll take the opportunity and do 
> it all at once.

 FYI, still working on it.  The POWER9 issue turned out to be a combination 
of a driver configuration issue with the distribution, caused by a chain of 
historical events leading to the use of PCI I/O bus commands not supported 
by the PHB PCIe host bridge, and a bad solder joint at the adapter's main 
PDQ IC on a 20+-year-old brand new card.

 I hope to have the adapter properly fixed soon and I'll look at the Malta 
side now, possibly using the old server whose DEFPA has worked flawlessly 
for some 20 years now.  I plan to use the interface to supply NFS root, 
which I think should be enough of a stress test.

 Patches will follow sometime too: one for the driver's configuration issue, 
a nonsense in 2021 that I should have addressed long ago, and one for 
resource handling, which I think should explicitly fail port I/O claims on a 
system that does not support port I/O at all and should not allow this:
# cat /proc/ioports
- : 0031:02:04.0
# 

to happen.
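A hedged sketch of the sort of check meant here, using the generic PCI
resource API; the helper name dfx_claim_pio_bar() and its placement in the
defxx driver's probe path are assumptions made purely for illustration:

	/* Illustrative only: refuse to claim a port I/O BAR when the platform
	 * has not assigned any port I/O resources to the device. */
	static int dfx_claim_pio_bar(struct pci_dev *pdev, int bar)
	{
		if (!(pci_resource_flags(pdev, bar) & IORESOURCE_IO))
			return -ENXIO;	/* not a port I/O BAR */
		if (!pci_resource_len(pdev, bar))
			return -ENXIO;	/* BAR left unassigned: no port I/O here */
		return pci_request_region(pdev, bar, "defxx");
	}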

  Maciej