On 2020/6/30 5:21 PM, Michael S. Tsirkin wrote:
On Tue, Jun 30, 2020 at 04:29:19PM +0800, Jason Wang wrote:
On 2020/6/30 10:41 AM, Jason Wang wrote:
On 2020/6/29 9:34 PM, Peter Xu wrote:
On Mon, Jun 29, 2020 at 01:51:47PM +0800, Jason Wang wrote:
On 2020/6/28 10:47 PM, Peter Xu wrote:
On Sun, Jun 28, 2020 at 03:03:41PM +0800, Jason Wang wrote:
On 2020/6/27 5:29 AM, Peter Xu wrote:
Hi, Eugenio,
(CCing Eric, Yan and Michael too)
On Fri, Jun 26, 2020 at 08:41:22AM +0200, Eugenio Pérez wrote:
diff --git a/memory.c b/memory.c
index 2f15a4b250..7f789710d2 100644
--- a/memory.c
+++ b/memory.c
@@ -1915,8 +1915,6 @@ void memory_region_notify_one(IOMMUNotifier *notifier,
         return;
     }
-    assert(entry->iova >= notifier->start &&
-           entry_end <= notifier->end);
I can understand that removing the assertion should solve the
issue; however, IMHO the major issue is not this single assertion
but the whole addr_mask issue behind it with virtio...
I don't get it here; it looks to me like the range came from the
guest IOMMU drivers.
Yes. Note that I didn't mean that it's a problem in virtio;
it's just the fact that virtio is the only one I know of that
would like to support an arbitrary address range for the
translated region. I don't know about tcg, but vfio should still
need some kind of page alignment in both the address and the
addr_mask. We have that assumption across the memory core too
when we do translations.
Right but it looks to me the issue is not the alignment.
A further cause of the issue is the MSI region when vIOMMU is
enabled - currently we implement the interrupt region using another
memory region, so it splits the whole DMA region into two parts.
That's really a clean approach to the IR implementation; however,
it's also a burden for the invalidation part, because then we'll
need to handle things like this when the listened range is not page
aligned at all (neither 0-0xfedfffff nor 0xfef00000-MAX). Without
the IR region (so the whole iommu address range would be a single
FlatRange),
Is this a bug? I remember that at least for vtd, it won't do any
DMAR on the interrupt address range.
I don't think it's a bug; at least it's working as I understand
it... That interrupt range is using an IR region, which is why I
said the IR region splits the DMAR region into two pieces, so we
have two FlatRanges for the same IOMMUMemoryRegion.
I didn't check the qemu code, but if "a single FlatRange" means
0xFEEx_xxxx is subject to DMA remapping, the OS would need to set up
a passthrough mapping for that range in order to get MSI to work.
This is not what the vtd spec says:
"""
3.14 Handling Requests to Interrupt Address Range
Requests without PASID to address range 0xFEEx_xxxx are treated as
potential interrupt requests and are not subjected to DMA remapping
(even if translation structures specify a mapping for this
range). Instead, remapping hardware can be enabled to subject such
interrupt requests to interrupt remapping.
"""
My understanding is vtd won't do any DMA translation on 0xFEEx_xxxx even
if IR is not enabled.
Ok, we had a dedicated mr for interrupt:
    memory_region_add_subregion_overlap(MEMORY_REGION(&vtd_dev_as->iommu),
                                        VTD_INTERRUPT_ADDR_FIRST,
                                        &vtd_dev_as->iommu_ir, 1);
So it should be fine. I guess the reason I'm asking is that I
thought "IR" meant "Interrupt Remapping", but in fact it means
"Interrupt Region"?
But I'm still not clear about the invalidation part for the
interrupt region; maybe you can elaborate a little more on this.
Btw, I think a guest can trigger the assert in
vtd_do_iommu_translate() if we teach vhost to DMA to that region:
Why would we want to?
I meant a buggy (malicious) guest driver.
Thanks
    /*
     * We have standalone memory region for interrupt addresses, we
     * should never receive translation requests in this region.
     */
    assert(!vtd_is_interrupt_addr(addr));
Would it be better to return false here? (We can work on the fix
for vhost, but it would not be trivial.)
Thanks