On Mon, 2018-02-26 at 15:01 -0800, Alexander Duyck wrote:
> I am interested in adding a new memory mapping option that
> establishes
> one identity-mapped region for all DMA_TO_DEVICE mappings and creates
> a new dynamic mapping for any DMA_FROM_DEVICE and DMA_BIDIRECTIONAL
> mappings. My thought is it should allow for a compromise between
> security and performance (in the case of networking) in that many of
> server NIC drivers these days are running with mostly pinned or
> reused pages for Rx. By using an identity mapping for the Tx packets
> we should be able to significantly cut down on the IOMMU overhead for
> the device. The other advantage if this works is that we could use
> this to possibly do something like dirty page tracking in the case of
> an emulated version of the IOMMU.
> 
> I was originally thinking I could get away with just reusing the
> identity mapping code but it looks like that would end up merging
> everything into one domain if I am understanding correctly. Do I have
> that right?
> 
> Would I be correct in assuming that I will need to have a separate
> domain per device, each domain containing the 1 TO_DEVICE identity
> mapped region, and then whatever other mappings are needed to handle
> the FROM and BIDIRECTIONAL mappings?

In the normal model where we explicitly map every RX and TX buffer, you
have a domain per device anyway; that's not a new requirement for your
model.

It sounds like an interesting idea; I agree that it's a reasonable
compromise between security and performance. The device can *read* all
of memory, but it can't write anywhere that isn't explicitly mapped.

In addition, we map RX buffers some time in advance of them being
needed (when replenishing the tail of the RX ring), so the latency of
the map operation hopefully shouldn't be quite so much of an issue,
while TX packets can go straight to the device with no mapping latency
at all. Overall, I think it might work really well.

You don't want the existing identity mapping code; that will give you a
RW mapping, which you don't want — you really do want read-only, or
this whole exercise is pointless, right? And you're right, it would
have put the device into the single shared identity domain.

You could probably start by mocking this up with the IOMMU API. Create
a domain with the 1:1 read-only mapping of all memory, add your device
to it, and then do your writeable mappings on top (at IOVAs higher than
the top of physical memory). That's probably a quick way to assess
performance and prove the concept (although you don't get deferred
unmap of RX packets that way, which might mess things up a bit).
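Very roughly, the mock-up might look something like this against the
in-kernel IOMMU API — a sketch only, not tested: `top_of_ram` is a
placeholder you'd have to derive yourself, error handling and IOMMU
page-size negotiation are elided, and `iommu_map()`'s exact signature
has shifted across kernel versions:

```c
#include <linux/iommu.h>

static int setup_ro_identity_domain(struct device *dev)
{
	struct iommu_domain *domain;
	phys_addr_t pa;
	int ret;

	domain = iommu_domain_alloc(dev->bus);
	if (!domain)
		return -ENOMEM;

	/*
	 * 1:1 read-only map of all of RAM: the device can read anywhere,
	 * so TX needs no per-buffer map. (Assumes the IOMMU supports 1G
	 * mappings; otherwise iterate at a smaller granule.)
	 */
	for (pa = 0; pa < top_of_ram; pa += SZ_1G)
		iommu_map(domain, pa, pa, SZ_1G, IOMMU_READ);

	ret = iommu_attach_device(domain, dev);

	/*
	 * Writeable (RX/bidirectional) mappings then go in at IOVAs above
	 * top_of_ram, e.g.:
	 *
	 *   iommu_map(domain, top_of_ram + off, buf_pa, PAGE_SIZE,
	 *             IOMMU_READ | IOMMU_WRITE);
	 */
	return ret;
}
```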

When we expose this through the DMA API, I'd quite like this *not* to
be Intel-specific. It could reasonably live in a higher layer and be
usable with all kinds of IOMMU implementations.

_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu