Hi Oleksandr,
On 15/03/17 20:05, Oleksandr Tyshchenko wrote:
From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
Update the IOMMU mapping if the IOMMU doesn't share page tables with the CPU.
The best place to do so on ARM is p2m_set_entry(). Use mfn as an indicator
of the required action: if mfn is valid, call iommu_map_pages();
otherwise, call iommu_unmap_pages().
Can you explain in the commit message why you make this change in
p2m_set_entry and not __p2m_set_entry?
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
---
xen/arch/arm/p2m.c | 40 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 39 insertions(+), 1 deletion(-)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 1fc6ca3..84d3a09 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -979,7 +979,8 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
if ( p2m_valid(orig_pte) && entry->p2m.base != orig_pte.p2m.base )
p2m_free_entry(p2m, orig_pte, level);
- if ( need_iommu(p2m->domain) && (p2m_valid(orig_pte) || p2m_valid(*entry)) )
+ if ( need_iommu(p2m->domain) && iommu_use_hap_pt(d) &&
+ (p2m_valid(orig_pte) || p2m_valid(*entry)) )
rc = iommu_iotlb_flush(p2m->domain, gfn_x(sgfn), 1UL << page_order);
else
rc = 0;
@@ -997,6 +998,9 @@ int p2m_set_entry(struct p2m_domain *p2m,
p2m_type_t t,
p2m_access_t a)
{
+ unsigned long orig_nr = nr;
+ gfn_t orig_sgfn = sgfn;
+ mfn_t orig_smfn = smfn;
int rc = 0;
while ( nr )
@@ -1029,6 +1033,40 @@ int p2m_set_entry(struct p2m_domain *p2m,
nr -= (1 << order);
}
+ if ( likely(!rc) )
I would do
if ( unlikely(rc) )
return rc;
/* IOMMU map/unmap ... */
This would remove one level of indentation from the if block.
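For illustration, the early-return shape suggested above could look like the sketch below. This is a standalone mock-up, not the real Xen code: need_iommu_stub, iommu_use_hap_pt_stub and iommu_map_pages_stub are hypothetical stand-ins for need_iommu(), iommu_use_hap_pt() and iommu_map_pages(), which cannot be compiled outside the hypervisor tree.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stubs standing in for the real Xen helpers. */
static bool need_iommu_stub = true;        /* need_iommu(d) */
static bool iommu_use_hap_pt_stub = false; /* iommu_use_hap_pt(d) */

static int iommu_map_pages_stub(unsigned long gfn, unsigned long mfn,
                                unsigned long nr, unsigned int flags)
{
    (void)gfn; (void)mfn; (void)nr; (void)flags;
    return 0;
}

/*
 * Tail of p2m_set_entry() restructured with the suggested early return:
 * bail out on CPU mapping failure first, then perform the IOMMU update
 * at the outer indentation level.
 */
static int p2m_set_entry_tail(int rc, unsigned long orig_sgfn,
                              unsigned long orig_smfn, unsigned long orig_nr)
{
    if ( rc )
        return rc;

    /* The IOMMU map/unmap only runs when the CPU mapping succeeded. */
    if ( need_iommu_stub && !iommu_use_hap_pt_stub )
        rc = iommu_map_pages_stub(orig_sgfn, orig_smfn, orig_nr, 0);

    return rc;
}
```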
+ {
+ /*
+ * It's time to update the IOMMU mapping if the latter doesn't
+ * share page tables with the CPU. Pass the whole memory block to let
+ * the IOMMU code decide what to do with it.
+ * The reason to update the IOMMU mapping outside the "while loop" is that
+ * the IOMMU might not support the pages/superpages the CPU can deal
+ * with (4K, 2M, 1G) and as a result this would lead to a non-optimal
+ * mapping.
My worry with this solution is that someone may decide to use __p2m_set_entry
(see relinquish) directly because he knows the correct order. So the
IOMMU would be correctly handled when page tables are shared, but not when
they are "unshared".
I would probably extend __p2m_set_entry with a new parameter indicating
whether we want to map the page in the IOMMU directly or not.
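Illustrating that idea, the extra parameter might look as follows. This is a standalone sketch: the 'update_iommu' flag and all helper names are hypothetical, and the real __p2m_set_entry of course takes the p2m, entry and type arguments as well.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the Xen types used by __p2m_set_entry. */
typedef unsigned long gfn_t;
typedef unsigned long mfn_t;

static int calls_to_iommu; /* counts IOMMU updates, for demonstration */

static int iommu_map_one(gfn_t gfn, mfn_t mfn, unsigned int order)
{
    (void)gfn; (void)mfn; (void)order;
    calls_to_iommu++;
    return 0;
}

/*
 * Sketch of __p2m_set_entry() gaining a flag so that direct callers
 * (e.g. relinquish) can still request the IOMMU update themselves.
 * 'update_iommu' is the hypothetical new parameter.
 */
static int __p2m_set_entry_sketch(gfn_t sgfn, unsigned int order, mfn_t smfn,
                                  bool update_iommu)
{
    /* ... the CPU page-table update would happen here ... */
    if ( update_iommu )
        return iommu_map_one(sgfn, smfn, order);
    return 0;
}
```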
Also, do you expect the IOMMU to support a page granularity bigger than
the one supported by Xen? If not, we should probably have a check
somewhere to prevent a potential security issue, as we would map more than
expected.
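For what it's worth, such a check could be sketched like this, assuming a hypothetical bitmask of block orders the IOMMU supports (modelled on the pgsize_bitmap idea from Linux IOMMU drivers; it is not an existing Xen field):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical mask of block orders the IOMMU can map: 4K, 2M, 1G. */
#define IOMMU_SUPPORTED_ORDERS ((1UL << 0) | (1UL << 9) | (1UL << 18))

/* Refuse orders the IOMMU cannot map, so we never map more than expected. */
static bool iommu_order_ok(unsigned int order)
{
    return order < 64 && (IOMMU_SUPPORTED_ORDERS & (1UL << order));
}
```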
+ * Also we assume that the IOMMU mapping should be updated only
+ * if the CPU mapping succeeded.
+ */
+ if ( need_iommu(p2m->domain) && !iommu_use_hap_pt(p2m->domain) )
+ {
+ if ( !mfn_eq(orig_smfn, INVALID_MFN) )
+ {
+ unsigned int flags = p2m_get_iommu_flags(t);
+
+ rc = iommu_map_pages(p2m->domain,
+ gfn_x(orig_sgfn),
+ mfn_x(orig_smfn),
+ orig_nr,
+ flags);
+ }
+ else
+ {
+ rc = iommu_unmap_pages(p2m->domain,
+ gfn_x(orig_sgfn),
+ orig_nr);
+ }
+ }
+ }
+
return rc;
}
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel