When emulating a rep I/O operation it is possible that the ioreq will
describe a single operation that spans multiple GFNs. This is fine as long
as all those GFNs fall within an MMIO region covered by a single device
model, but unfortunately the higher levels of the emulation code do not
guarantee that. This is something that should almost certainly be fixed,
but in the meantime this patch makes sure that MMIO is truncated at GFN
boundaries and hence the appropriate device model is re-evaluated for each
GFN.
NOTE: This patch does not deal with the case of a single MMIO operation
spanning a GFN boundary. That is more complex to deal with and is
deferred to a subsequent patch.
Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
Cc: Jan Beulich <jbeul...@suse.com>
Cc: Andrew Cooper <andrew.coop...@citrix.com>
- Cosmetic fixes suggested by Andrew.
- Make sure we don't end up with p.count == 0, and make sure we end
up with p.count == 1 where a single op spans a GFN boundary.
xen/arch/x86/hvm/emulate.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 8385c62145..602c21ce2a 100644
@@ -184,6 +184,25 @@ static int hvmemul_do_io(
+    /*
+     * Make sure that we truncate rep MMIO at any GFN boundary. This is
+     * necessary to ensure that the correct device model is targeted
+     * or that we correctly handle a rep op spanning MMIO and RAM.
+     */
+    if ( unlikely(p.count > 1) && p.type == IOREQ_TYPE_COPY )
+    {
+        unsigned long off = p.addr & ~PAGE_MASK;
+
+        if ( PAGE_SIZE - off < p.size ) /* single rep spans GFN */
+            p.count = 1;
+        else
+            p.count = min_t(unsigned long,
+                            p.count,
+                            ((p.df ? (off + p.size) : (PAGE_SIZE - off)) /
+                             p.size));
+    }
+
+    ASSERT(p.count);
 
     vio->io_req = p;
 
     rc = hvm_io_intercept(&p);