On 8/1/2024 1:22 AM, Eugenio Perez Martin wrote:
On Thu, Aug 1, 2024 at 2:41 AM Si-Wei Liu <si-wei....@oracle.com> wrote:
Hi Jonah,

On 7/31/2024 7:09 AM, Jonah Palmer wrote:
Let me clarify, correct me if I'm wrong:

1) The IOVA allocator is still implemented via a tree; we just don't
need to store how the IOVA is used
2) A dedicated GPA -> IOVA tree, updated via listeners and used in the
datapath SVQ translation
3) A linear mapping or another SVQ -> IOVA tree used for SVQ

His solution is composed of three trees:
1) One for the IOVA allocations, so we know where to allocate
new ranges
2) One for the GPA -> SVQ IOVA translations.
3) Another one for SVQ vrings translations.
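
To make that concrete, here is a minimal sketch of how the three trees
could hang together; the field names are illustrative only, not the
actual declarations in QEMU's hw/virtio/vhost-iova-tree.c:

    /* Illustrative sketch; builds only against QEMU's headers. */
    #include "qemu/iova-tree.h"

    typedef struct VhostIOVATreeSketch {
        /* (1) IOVA allocator: tracks which IOVA ranges are taken,
         *     without caring how each range is used */
        IOVATree *iova_allocator;
        /* (2) GPA -> SVQ IOVA: maintained by the memory listener,
         *     used for datapath SVQ translation of guest memory */
        IOVATree *gpa_map;
        /* (3) HVA -> SVQ IOVA: host-only mappings, e.g. SVQ vrings
         *     and shadow buffers */
        IOVATree *svq_map;
    } VhostIOVATreeSketch;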

For my understanding, say we have those 3 memory mappings:

Map    HVA                                   GPA                           IOVA
---------------------------------------------------------------------------------------------------
(1)    [0x7f7903e00000, 0x7f7983e00000)      [0x0, 0x80000000)             [0x1000, 0x80000000)
(2)    [0x7f7983e00000, 0x7f9903e00000)      [0x100000000, 0x2080000000)   [0x80001000, 0x2000001000)
(3)    [0x7f7903ea0000, 0x7f7903ec0000)      [0xfeda0000, 0xfedc0000)      [0x2000001000, 0x2000021000)

And then say when we go to unmap (e.g. vhost_vdpa_svq_unmap_ring)
we're given an HVA of 0x7f7903eb0000, which fits in both the first and
third mappings.

The correct one to remove here would be the third mapping, right? Not
only because the HVA range of the third mapping has a more "specific"
or "tighter" range fit given an HVA of 0x7f7903eb0000 (which, as I
understand, may not always be the case in other scenarios), but mainly
because the HVA->GPA translation would give GPA 0xfedb0000, which only
fits in the third mapping's GPA range. Am I understanding this correctly?
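
For concreteness, the arithmetic behind that translation, as a tiny
self-contained check using the values from the table above:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        /* Mapping (3) from the table */
        uint64_t hva_base = 0x7f7903ea0000ULL;
        uint64_t gpa_base = 0xfeda0000ULL;

        uint64_t hva = 0x7f7903eb0000ULL;   /* address being unmapped */
        uint64_t gpa = gpa_base + (hva - hva_base);

        /* 0xfeda0000 + 0x10000 == 0xfedb0000, which only fits in
         * mapping (3)'s GPA range [0xfeda0000, 0xfedc0000), so the
         * GPA disambiguates what the HVA alone cannot. */
        assert(gpa == 0xfedb0000ULL);
        return 0;
    }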
You're correct, we would still need a GPA -> IOVA tree for mapping and
unmapping on guest mem. I've talked to Eugenio this morning and I think
he is now aligned. Granted, this GPA tree covers only part of the IOVA
space and doesn't contain ranges backed by host-only memory (e.g. SVQ
descriptors or buffers). We could create API variants of
vhost_iova_tree_map_alloc() and vhost_iova_tree_map_remove() that not
only add the IOVA -> HVA range to the HVA tree but also manipulate the
GPA tree to maintain guest memory mappings, and that are invoked only
from the memory listener ops. That way the new API is distinguishable
from the one used in the SVQ mapping and unmapping path, which only
manipulates the HVA tree.
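
A rough sketch of what such a listener-only variant might look like;
the _gpa name, the gpa_map field, and the gpa parameter are
hypothetical here, while vhost_iova_tree_map_alloc() and
iova_tree_insert() are the existing entry points:

    /* Hypothetical listener-only variant: allocate IOVA and record
     * the mapping in both the HVA tree and the GPA tree. */
    int vhost_iova_tree_map_alloc_gpa(VhostIOVATree *tree, DMAMap *map,
                                      hwaddr gpa)
    {
        /* Allocate an IOVA range and insert IOVA -> HVA, as today */
        int r = vhost_iova_tree_map_alloc(tree, map);
        if (r != IOVA_OK) {
            return r;
        }

        /* Also record GPA -> IOVA so the datapath can look up guest
         * memory by GPA (gpa_map is an assumed field) */
        DMAMap gpa_entry = *map;
        gpa_entry.translated_addr = gpa;
        return iova_tree_insert(tree->gpa_map, &gpa_entry);
    }

A matching vhost_iova_tree_map_remove_gpa() would drop the entry from
both trees, while the plain alloc/remove pair stays as-is for the
SVQ-only path.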

Right, I think I understand Jason's and your approach better now, and
I think it is the best one. Modifying the lookup API is hard, as the
caller does not know whether the HVA being looked up is contained in
guest memory or not. Modifying the region add/remove operations is
easier, as they do know.
Exactly.


I think the only case you need to pay attention to in the
implementation is the SVQ address translation path: when you come to an
HVA address for translation, you need to tell apart which tree to look
up. If the HVA is backed by guest memory, you can use the API
qemu_ram_block_from_host() to infer the ram block and then the GPA, so
you end up doing a lookup on the GPA tree; otherwise the HVA may be
from the SVQ mappings, where you'd have to search the HVA tree again
for a host-mem-only range before you can claim the HVA is a
bogus/unmapped address...
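
Roughly, that dispatch could look like this; qemu_ram_block_from_host()
is the existing API, while vhost_iova_tree_find_gpa() and
gpa_from_ramblock() are made-up names for illustration:

    /* Sketch of the SVQ translation dispatch described above */
    static const DMAMap *svq_translate(VhostIOVATree *tree,
                                       void *hva, size_t len)
    {
        ram_addr_t offset;
        RAMBlock *rb = qemu_ram_block_from_host(hva, false, &offset);
        DMAMap needle = { .size = len - 1 };  /* size is inclusive */

        if (rb) {
            /* Guest-memory-backed HVA: derive the GPA and search
             * the GPA -> IOVA tree (helper names assumed) */
            needle.translated_addr = gpa_from_ramblock(rb, offset);
            return vhost_iova_tree_find_gpa(tree, &needle);
        }

        /* Otherwise it can only be an SVQ host-only mapping: search
         * the HVA tree; a miss means a bogus/unmapped address */
        needle.translated_addr = (hwaddr)(uintptr_t)hva;
        return vhost_iova_tree_find_iova(tree, &needle);
    }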
I'd leave this HVA -> IOVA tree for future performance optimization on
top, and focus on the aliased maps for a first series.

However, calling qemu_ram_block_from_host() is actually not needed if
the HVA tree contains all the translations, for both SVQ buffers and
guest memory.
If we don't take into account any aliased maps or overlapped HVAs,
looking up through the HVA tree itself should work. However, I think
calling qemu_ram_block_from_host() further assures that we always deal
with the real ram block that backs the guest memory, which is hard to
guarantee with an IOVA -> HVA tree alone in case overlapped HVA ranges
exist. This is simple and reliable, since we avoid building the HVA
lookup tree around assumptions or API implications in the memory
subsystem. I'd lean toward using the existing memory system API to
simplify the implementation of the IOVA -> HVA tree (especially the
lookup routine).



For now, this additional second lookup is sub-optimal but inevitable.
I think both of us agreed that you could implement this version first
and look for future opportunities to optimize the lookup performance
on top.

Right, thanks for explaining!

Thanks for the discussion!
-Siwei

---

In the case where the first mapping (GPA [0x0, 0x80000000)) is
removed, why do we use the word "reintroduce" here? As I understand it,
when we remove a mapping, we're essentially invalidating the IOVA range
associated with that mapping, right? In other words, the IOVA ranges
here don't overlap, so removing a mapping whose HVA range overlaps
another mapping's HVA range shouldn't affect the other mapping, since
they have unique IOVA ranges. Is my understanding correct here, or am I
missing something?
With the GPA tree I think this case should work fine. I've
double-checked the implementation of the vhost-vdpa iotlb and don't see
a red flag there.
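
E.g., a region-del handler keyed on GPA would go roughly like this
(the _gpa helpers are assumed names, as above):

    /* Removing mapping (1) by GPA cannot disturb mapping (3), even
     * though their HVA ranges overlap: each mapping owns a unique
     * IOVA range and is found by its unique GPA range. */
    static void listener_region_del(VhostIOVATree *tree,
                                    hwaddr gpa, size_t len)
    {
        DMAMap needle = { .translated_addr = gpa, .size = len - 1 };
        const DMAMap *map = vhost_iova_tree_find_gpa(tree, &needle);

        if (map) {
            vhost_iova_tree_remove_gpa(tree, *map);
        }
    }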

Thanks,
-Siwei



