On Thu, 21 Oct 2021 03:22:30 +0100,
Lu Baolu <baolu...@linux.intel.com> wrote:
> 
> On 10/20/21 10:22 PM, Marc Zyngier wrote:
> > On Wed, 20 Oct 2021 06:21:44 +0100,
> > Lu Baolu <baolu...@linux.intel.com> wrote:
> >> 
> >> On 2021/10/20 0:37, Sven Peter via iommu wrote:
> >>> + /*
> >>> +  * Check that CPU pages can be represented by the IOVA granularity.
> >>> +  * This has to be done after ops->attach_dev since many IOMMU drivers
> >>> +  * only limit domain->pgsize_bitmap after having attached the first
> >>> +  * device.
> >>> +  */
> >>> + ret = iommu_check_page_size(domain);
> >>> + if (ret) {
> >>> +         __iommu_detach_device(domain, dev);
> >>> +         return ret;
> >>> + }
> >> 
> >> This looks odd. __iommu_attach_device() attaches an I/O page table for a
> >> device. How does that relate to CPU pages? And why is it a failure if
> >> the CPU page size cannot be represented?
> > 
> > If you allocate a CPU PAGE_SIZE'd region and point a device at it,
> > that device can now DMA to more than what you have allocated,
> > because the IOMMU's own page size is larger; the device gets access
> > to data it shouldn't see. In my book, that's a pretty bad thing.
> 
> But even if you enforce the CPU page size check here, this problem
> still exists unless all DMA buffers are PAGE_SIZE-aligned and
> PAGE_SIZE-sized, right?

Let me take a CPU analogy: you have a page that contains some user
data *and* a kernel secret. How do you map this page into userspace
without leaking the kernel secret?

PAGE_SIZE allocations are the unit of isolation, and this applies to
both the CPU and the IOMMU. If you have allocated a DMA buffer that
is smaller than a page, you then have to resort to bounce buffering,
or accept that your data isn't safe.
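
To make that concrete, here is a rough sketch of the kind of check
the hunk above performs. This is my reconstruction for illustration,
not necessarily Sven's exact helper; the error code in particular is
an assumption:

static int iommu_check_page_size(struct iommu_domain *domain)
{
	/*
	 * Assumes a paging domain with a non-empty pgsize_bitmap.
	 * The smallest IOVA granule the domain supports is the
	 * lowest set bit of the bitmap.
	 */
	unsigned long min_pgsize = 1UL << __ffs(domain->pgsize_bitmap);

	/*
	 * If the IOMMU cannot map at CPU-page granularity, a single
	 * PAGE_SIZE mapping necessarily exposes neighbouring memory:
	 * a 4K CPU page inside a 16K IOVA granule drags 12K of
	 * unrelated data into the device's view.
	 */
	if (min_pgsize > PAGE_SIZE)
		return -EFAULT;

	return 0;
}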

        M.

-- 
Without deviation from the norm, progress is not possible.