On Wed, Sep 04, 2019 at 08:49:03AM +0200, Thomas Hellström (VMware) wrote:
> For device DMA address purposes, the encryption status is encoded in the
> dma address by the dma layer in phys_to_dma().
>
> > There doesn’t seem to be any real funny business in dma_mmap_attrs() or
> > dma_common_mmap().
>
> No, from what I can tell the call in these functions to dma_pgprot()
> generates an incorrect page protection since it doesn't take unencrypted
> coherent memory into account. I don't think anybody has used these
> functions yet with SEV.
Yes, I think dma_pgprot is not correct for SEV.  Right now that function
isn't used much on x86; it had more grave bugs up to a few -rcs ago.

> > Would it make sense to add a vmf_insert_dma_page() to directly do
> > exactly what you’re trying to do?
>
> Yes, but as a longer term solution I would prefer a general dma_pgprot()
> exported, so that we could, in a dma-compliant way, use coherent pages
> with other APIs, like kmap_atomic_prot() and vmap(). That is, basically
> split coherent page allocation in two steps: allocation and mapping.

The thing is that dma_pgprot is of no help for you at all, as the DMA
API hides the page from you entirely.  In fact we do have backends that
do not even have a page backing.  But I think we can have a
vmf_insert_page equivalent that does the right thing behind your back
for the various different implementations (contiguous page(s) in the
kernel linear mapping, contiguous page(s) with a vmap/ioremap remapping
in various flavours, non-contiguous page(s) with a vmap remapping, and
deeply magic firmware-populated pools (well, except maybe for the last,
but at least we can fail gracefully there)).
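For what such a helper might look like, kernel-flavoured pseudocode only — dma_vmf_insert_pfn() is a name invented here, not an existing interface, and only the direct-mapping branch is spelled out; the vmap/ioremap and no-struct-page backends would need their own cases:

```
/* Hypothetical fault-handler helper: the caller hands back the cookie
 * pair it got from dma_alloc_attrs() and never sees a struct page. */
vm_fault_t dma_vmf_insert_pfn(struct device *dev, struct vm_fault *vmf,
			      void *cpu_addr, dma_addr_t dma_addr)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long offset = (vmf->pgoff - vma->vm_pgoff) << PAGE_SHIFT;

	if (dma_is_direct(get_dma_ops(dev))) {
		/* Contiguous page(s) in the kernel linear mapping; the
		 * pgprot must account for unencrypted (SEV) memory. */
		return vmf_insert_pfn_prot(vma, vmf->address,
				page_to_pfn(virt_to_page(cpu_addr + offset)),
				dma_pgprot(dev, vma->vm_page_prot, 0));
	}
	/* vmap/ioremap-remapped and non-contiguous backends elided;
	 * firmware-populated pools with no page backing fail gracefully. */
	return VM_FAULT_SIGBUS;
}
```

The attraction is that each dma_map_ops backend can pick the right pfn and pgprot internally, so drivers never have to reverse-engineer how the coherent buffer was allocated.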