Re: [Xen-devel] [PATCH 1/2] x86/HVM: suppress I/O completion for port output

2018-04-10 Thread Juergen Gross
On 09/04/18 15:23, Jan Beulich wrote:
> We don't break up port requests in case they cross emulation entity
> boundaries, and a write to an I/O port is necessarily the last
> operation of an instruction instance, so there's no need to re-invoke
> the full emulation path upon receiving the result from an external
> emulator.
> 
> In case we want to properly split port accesses in the future, this
> change will need to be reverted, as it would prevent things working
> correctly when e.g. the first part needs to go to an external emulator,
> while the second part is to be handled internally.
> 
> While this addresses the reported problem of Windows paging out the
> buffer underneath an in-process REP OUTS, it does not address the wider
> problem of the re-issued insn (to the insn emulator) being prone to
> raise an exception (#PF) during a replayed, previously successful memory
> access (we only record prior MMIO accesses).
> 
> Leaving aside the problem being worked around here, I think the
> performance aspect alone is a good reason to change the behavior.
> 
> Also take the opportunity to change bool_t -> bool as
> hvm_vcpu_io_need_completion()'s return type.
> 
> Signed-off-by: Jan Beulich 

Release-acked-by: Juergen Gross 


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 1/2] x86/HVM: suppress I/O completion for port output

2018-04-09 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 09 April 2018 14:24
> To: xen-devel 
> Cc: Andrew Cooper ; Paul Durrant
> ; Juergen Gross 
> Subject: [PATCH 1/2] x86/HVM: suppress I/O completion for port output
> 
> We don't break up port requests in case they cross emulation entity
> boundaries, and a write to an I/O port is necessarily the last
> operation of an instruction instance, so there's no need to re-invoke
> the full emulation path upon receiving the result from an external
> emulator.
> 
> In case we want to properly split port accesses in the future, this
> change will need to be reverted, as it would prevent things working
> correctly when e.g. the first part needs to go to an external emulator,
> while the second part is to be handled internally.
> 
> While this addresses the reported problem of Windows paging out the
> buffer underneath an in-process REP OUTS, it does not address the wider
> problem of the re-issued insn (to the insn emulator) being prone to
> raise an exception (#PF) during a replayed, previously successful memory
> access (we only record prior MMIO accesses).
> 
> Leaving aside the problem being worked around here, I think the
> performance aspect alone is a good reason to change the behavior.
> 
> Also take the opportunity to change bool_t -> bool as
> hvm_vcpu_io_need_completion()'s return type.
> 
> Signed-off-by: Jan Beulich 

Reviewed-by: Paul Durrant 

> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -282,7 +282,11 @@ static int hvmemul_do_io(
>  rc = hvm_send_ioreq(s, &p, 0);
>  if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
>  vio->io_req.state = STATE_IOREQ_NONE;
> -else if ( data_is_addr )
> +/*
> + * This effectively is !hvm_vcpu_io_need_completion(vio), slightly
> + * optimized and using local variables we have available.
> + */
> +else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )
>  rc = X86EMUL_OKAY;
>  }
>  break;
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -91,10 +91,12 @@ struct hvm_vcpu_io {
>  const struct g2m_ioport *g2m_ioport;
>  };
> 
> -static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
> +static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
>  {
>  return (vio->io_req.state == STATE_IOREQ_READY) &&
> -   !vio->io_req.data_is_ptr;
> +   !vio->io_req.data_is_ptr &&
> +   (vio->io_req.type != IOREQ_TYPE_PIO ||
> +vio->io_req.dir != IOREQ_WRITE);
>  }
> 
>  struct nestedvcpu {
> 
> 



[Xen-devel] [PATCH 1/2] x86/HVM: suppress I/O completion for port output

2018-04-09 Thread Jan Beulich
We don't break up port requests in case they cross emulation entity
boundaries, and a write to an I/O port is necessarily the last
operation of an instruction instance, so there's no need to re-invoke
the full emulation path upon receiving the result from an external
emulator.

In case we want to properly split port accesses in the future, this
change will need to be reverted, as it would prevent things working
correctly when e.g. the first part needs to go to an external emulator,
while the second part is to be handled internally.

While this addresses the reported problem of Windows paging out the
buffer underneath an in-process REP OUTS, it does not address the wider
problem of the re-issued insn (to the insn emulator) being prone to
raise an exception (#PF) during a replayed, previously successful memory
access (we only record prior MMIO accesses).

Leaving aside the problem being worked around here, I think the
performance aspect alone is a good reason to change the behavior.

Also take the opportunity to change bool_t -> bool as
hvm_vcpu_io_need_completion()'s return type.

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -282,7 +282,11 @@ static int hvmemul_do_io(
 rc = hvm_send_ioreq(s, &p, 0);
 if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
 vio->io_req.state = STATE_IOREQ_NONE;
-else if ( data_is_addr )
+/*
+ * This effectively is !hvm_vcpu_io_need_completion(vio), slightly
+ * optimized and using local variables we have available.
+ */
+else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )
 rc = X86EMUL_OKAY;
 }
 break;
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,10 +91,12 @@ struct hvm_vcpu_io {
 const struct g2m_ioport *g2m_ioport;
 };
 
-static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
+static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
 {
 return (vio->io_req.state == STATE_IOREQ_READY) &&
-   !vio->io_req.data_is_ptr;
+   !vio->io_req.data_is_ptr &&
+   (vio->io_req.type != IOREQ_TYPE_PIO ||
+vio->io_req.dir != IOREQ_WRITE);
 }
 
 struct nestedvcpu {



