Hi Alex,

Answers are in-line for the questions I can answer. First an apology,
though: the machine with the FPGA board is 1000 miles away and I don't
have root access, so it's unlikely I will be able to do kernel patch
testing.

Alex Williamson scribed the following, on or around Fri, Mar 25, 2022 at 04:10:22PM -0600:
> Hi Daniel,
> ...
> > Coherency possibly.
>
> There's a possible coherency issue at the compare depending on the
> IOMMU capabilities which could affect whether DMA is coherent to memory
> or requires an explicit flush.  I'm a little suspicious whether dmar0
> is really the IOMMU controlling this device since you mention a 39bit
> IOVA space, which is more typical of Intel client platforms which can
> also have integrated graphics which often have a dedicated IOMMU at
> dmar0 that isn't necessarily representative of the other IOMMUs in the
> system, especially with regard to snoop-control.  Each dmar lists the
> managed devices under it in sysfs to verify.  Support for snoop-control
> would be identified in the ecap register rather than the cap register.
> VFIO can also report coherency via the VFIO_DMA_CC_IOMMU extension
> reported by VFIO_CHECK_EXTENSION ioctl.

  $ cat /sys/devices/virtual/iommu/dmar0/intel-iommu/cap
  d2008c40660462
  $ cat /sys/devices/virtual/iommu/dmar0/intel-iommu/ecap
  f050da
  $ lscpu | grep Model
  Model:       165
  Model name:  Intel(R) Xeon(R) W-1290P CPU @ 3.70GHz
  $ ls -l /sys/devices/virtual/iommu/dmar0/devices | wc -l
  24
  $ ...
  ioctl(container_fd, VFIO_CHECK_EXTENSION, VFIO_DMA_CC_IOMMU)
  0

What are the implications of having no "IOMMU enforces DMA cache
coherence"? On this machine there is no access to a PCIe bus analyzer,
but it's very unlikely that the TLPs would have NoSnoop set.

Is there a good way to tell which IOMMU I'm using?

(I did think it was strange that the IOMMU in this machine cannot handle
enough bits for mapping IOVA==VMA. The test code is running in a podman
container, but (naively) I wouldn't expect that to make a difference.)
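In case it's useful, the container setup and the extension query amount
to something like the sketch below. This is only a sketch, not the
actual test program: the VFIO group number is a placeholder, the QDMA
device setup and the DMA mappings are omitted, and error handling is
reduced to asserts.

  /* Sketch only: group "42" is a placeholder; the real group number
   * comes from /sys/bus/pci/devices/<bdf>/iommu_group. */
  #include <assert.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  int main(void)
  {
      struct vfio_group_status status = { .argsz = sizeof(status) };
      int container_fd, group_fd;

      container_fd = open("/dev/vfio/vfio", O_RDWR);
      assert(container_fd >= 0);
      assert(ioctl(container_fd, VFIO_GET_API_VERSION) == VFIO_API_VERSION);

      group_fd = open("/dev/vfio/42", O_RDWR);
      assert(group_fd >= 0);
      assert(ioctl(group_fd, VFIO_GROUP_GET_STATUS, &status) == 0);
      assert(status.flags & VFIO_GROUP_FLAGS_VIABLE);
      assert(ioctl(group_fd, VFIO_GROUP_SET_CONTAINER, &container_fd) == 0);
      assert(ioctl(container_fd, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU) == 0);

      /* 0 here means the type1 domain does not enforce DMA cache
       * coherency (no snoop control); we get 0 on this machine. */
      printf("VFIO_DMA_CC_IOMMU = %d\n",
             ioctl(container_fd, VFIO_CHECK_EXTENSION, VFIO_DMA_CC_IOMMU));
      return 0;
  }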
> However, CPU coherency might lead to a miscompare, but not necessarily a
> miscompare matching the previous iteration.  Still, for completeness
> let's make sure this isn't a gap in the test programming making invalid
> assumptions about CPU/DMA coherency.
>
> The fact that randomizing the IOVA provides a workaround though might
> suggest something relative to the IOMMU page table coherency.  But for
> the new mmap target to have the data from the previous iteration, the
> IOMMU PTE would need to be stale on read, but correct on write in order
> to land back in your new mmap.  That seems peculiar.  Are we sure the
> FPGA device isn't caching the value at the IOVA or using any sort of
> IOTLB caching such as ATS that might not be working correctly?

I cannot say for certain what the FPGA caches, if anything. The IP for
that part is closed (search for Xilinx PG302 QDMA). It should (!) be
well-tested... oh for an analyzer!

> > Suggestion: Document issue when using fixed IOVA, or fix if security
> > is a concern.
>
> I don't know that there's enough information here to make any
> conclusions.  Here are some further questions:
>
> * What size mappings are being used, both for the mmap and the VFIO
>   MAP/UNMAP operations.

The test would often fail when switching from an 8KB allocation to 12KB,
where the VMA would grow down by a page. The mmap() always returned a
4KB-aligned VMA, and the requested mmap() size was always an exact
number of 4KB pages. The VFIO map operations were always on the full
extent of the mmap'd memory (which likely makes Baolu's patch moot in
this case).

A typical (though not consistent) syndrome would be:

  1st page: ok
  2nd page: previous mmap'd data
  3rd page: ok

We saw the issue on transfers both to and from the card. I.e., we placed
a memory block in the FPGA that we could interrogate when data were
corrupted. (And as mentioned, just changing the IOVA fixed this issue.)
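In case the details matter, the per-iteration mmap and VFIO map/unmap
boil down to something like the sketch below. This is illustrative only:
the helper names and the fixed IOVA value are made up, container_fd is
the container from the setup above, and the code that kicks off the QDMA
transfer and compares the buffers is omitted.

  #include <assert.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/vfio.h>

  #define IOVA_FIXED 0x10000000ULL  /* arbitrary, reused every iteration */

  void *map_buffer(int container_fd, size_t size, uint64_t iova)
  {
      struct vfio_iommu_type1_dma_map dma_map = {
          .argsz = sizeof(dma_map),
          .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
          .iova  = iova,
          .size  = size,
      };

      /* Fresh anonymous mapping each iteration; always 4KB aligned and
       * an exact number of 4KB pages. */
      void *vaddr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      assert(vaddr != MAP_FAILED);

      dma_map.vaddr = (uint64_t)(uintptr_t)vaddr;
      assert(ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map) == 0);
      return vaddr;
  }

  void unmap_buffer(int container_fd, void *vaddr, size_t size,
                    uint64_t iova)
  {
      struct vfio_iommu_type1_dma_unmap dma_unmap = {
          .argsz = sizeof(dma_unmap),
          .iova  = iova,
          .size  = size,
      };

      assert(ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap) == 0);
      assert(munmap(vaddr, size) == 0);
  }

Passing IOVA_FIXED to map_buffer() on every iteration is what hits the
miscompare for us; generating a fresh page-aligned IOVA per iteration
(anywhere in the 39-bit IOVA space) makes it go away.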
> * If the above is venturing into super page support (2MB), does the
>   vfio_iommu_type1 module option disable_hugepages=1 affect the
>   results.

N/A.

> * Along the same lines, does the kernel command line option
>   intel_iommu=sp_off produce different results.

Would this affect small pages?

> * Does this behavior also occur on upstream kernels (ie. v5.17)?

Unknown, and (unfortunately) untestable at present.

> * Do additional CPU cache flushes in the test program produce different
>   results?

We did a number of experiments using combinations of MAP_LOCKED, mlock(),
barrier(), and _mm_clflush(). They all affected the reliability of the
test (through timing?), but all ultimately failed. I'm happy to try other
flushes that can be achieved in non-root user space!

> * Is this a consumer available FPGA device that others might be able
>   to reproduce this issue?  I've always wanted such a device for
>   testing, but also we can't rule out that the FPGA itself or its
>   programming is the source of the miscompare.

https://www.xilinx.com/products/boards-and-kits/vcu118.html

Just don't look at the price too hard!

> From the vfio perspective, UNMAP_DMA should first unmap the pages at
> the IOMMU to prevent device access before unpinning the pages.  We do
> make use of batch unmapping to reduce iotlb flushing, but the result is
> expected to be that the IOMMU PTE entries are invalidated before the
> UNMAP_DMA operation completes.  A stale IOVA would not be expected or
> correct operation.  Thanks,
>
> Alex

Thanks.

Daniel
