>>> On 24.03.16 at 12:49, <paul.durr...@citrix.com> wrote:
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 24 March 2016 11:29
>> @@ -196,8 +196,22 @@ int hvm_process_io_intercept(const struc
>>          }
>>      }
>>
>> -    if ( i != 0 && rc == X86EMUL_UNHANDLEABLE )
>> +    if ( unlikely(rc < 0) )
>>          domain_crash(current->domain);
>> +    else if ( i )
>> +    {
>> +        p->count = i;
>> +        rc = X86EMUL_OKAY;
>> +    }
>> +    else if ( rc == X86EMUL_UNHANDLEABLE )
>> +    {
>> +        /*
>> +         * Don't forward entire batches to the device model: This would
>> +         * prevent the internal handlers from seeing subsequent
>> +         * iterations of the request.
>> +         */
>> +        p->count = 1;
>
> I guess this is ok. If stdvga is not caching then the accept function
> would have failed so you won't get here, and if it sends the buffered
> ioreq then you still don't get here because it returns X86EMUL_OKAY.
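For reference, the completion logic in the hunk above can be sketched as a small stand-alone function. This is a simplified illustration, not the actual Xen code: the types, the `finish_intercept()` name, and the return-value plumbing are stand-ins, and `domain_crash()` is reduced to returning the negative error.

```c
#include <assert.h>

/* Simplified stand-ins for Xen's emulation return codes. */
#define X86EMUL_OKAY          0
#define X86EMUL_UNHANDLEABLE  2

typedef struct {
    unsigned long count;   /* number of iterations in the batch */
} ioreq_t;

/*
 * rc: outcome of the internal handler loop; i: iterations completed
 * before the handler stopped.  Mirrors the patched tail of
 * hvm_process_io_intercept() (sketch only).
 */
static int finish_intercept(ioreq_t *p, int rc, unsigned long i)
{
    if ( rc < 0 )
        return rc;                 /* real code calls domain_crash() */

    if ( i )
    {
        /* Some iterations were handled: report that partial batch. */
        p->count = i;
        return X86EMUL_OKAY;
    }

    if ( rc == X86EMUL_UNHANDLEABLE )
        /*
         * Forward only one iteration to the device model, so internal
         * handlers still get to see the remaining iterations.
         */
        p->count = 1;

    return rc;
}
```

With this shape, a batch of 8 where 3 iterations succeeded internally is reported as a completed 3-iteration request, while a batch where nothing was handled goes to the device model truncated to a single iteration.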
Good that you thought of this - I had forgotten that stdvga's MMIO
handling now takes this same code path rather than a fully separate one.
I guess I'll steal some of the wording above for the v2 commit message.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel