Re: [Xen-devel] [PATCH 2/6] x86/shadow: Fixes to hvm_emulate_insn_fetch()

2017-06-22 Thread Tim Deegan
Hi,

At 16:12 +0100 on 21 Jun (1498061548), Andrew Cooper wrote:
> Zero-length reads are jump-target segmentation checks; never serve them from
> the cache.

Why not?  If the target is in the cached range, then it has passed the
segmentation check.  (Or if that's not true then the normal fetch path
needs to be fixed too).

> Force insn_off to a single byte, as offset can wrap around or truncate with
> respect to sh_ctxt->insn_buf_eip under a number of normal circumstances.

Wouldn't it be better to detect that and fall through?  Otherwise we
might return cached bytes by mistake.
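
Concretely (a made-up, standalone example, not code from this patch): with a
byte-wide insn_off, any offset that happens to be a multiple of 256 away from
insn_buf_eip truncates to a small index and would look cached:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned long insn_buf_eip   = 0x401000UL; /* start of the cached window  */
    unsigned int  insn_buf_bytes = 15;         /* bytes actually in the cache */
    unsigned int  bytes          = 1;          /* size of the fetch           */

    /* A one-byte fetch 256 bytes past the cached window... */
    unsigned long offset = insn_buf_eip + 0x100;

    /* ...truncates to index 0 and passes the range check. */
    uint8_t insn_off = offset - insn_buf_eip;

    printf("insn_off = %u, served from cache? %s\n", (unsigned)insn_off,
           (insn_off + bytes) > insn_buf_bytes ? "no" : "yes");
    return 0;
}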

Tim.

> Signed-off-by: Andrew Cooper 
> ---
> CC: Tim Deegan 
> CC: Jan Beulich 
> ---
>  xen/arch/x86/mm/shadow/common.c | 10 +++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 2e64a77..deea03a 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -235,12 +235,16 @@ hvm_emulate_insn_fetch(enum x86_segment seg,
>  {
>  struct sh_emulate_ctxt *sh_ctxt =
>  container_of(ctxt, struct sh_emulate_ctxt, ctxt);
> -unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
> +/* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
> +uint8_t insn_off = offset - sh_ctxt->insn_buf_eip;
>  
>  ASSERT(seg == x86_seg_cs);
>  
> -/* Fall back if requested bytes are not in the prefetch cache. */
> -if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
> +/*
> + * Fall back if requested bytes are not in the prefetch cache, but always
> + * perform the zero-length read for segmentation purposes.
> + */
> +if ( !bytes || unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
>  return hvm_read(seg, offset, p_data, bytes,
>  hvm_access_insn_fetch, sh_ctxt);
>  
> -- 
> 2.1.4
> 

Re: [Xen-devel] [PATCH 2/6] x86/shadow: Fixes to hvm_emulate_insn_fetch()

2017-06-22 Thread Jan Beulich
>>> On 21.06.17 at 17:12,  wrote:
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -235,12 +235,16 @@ hvm_emulate_insn_fetch(enum x86_segment seg,
>  {
>  struct sh_emulate_ctxt *sh_ctxt =
>  container_of(ctxt, struct sh_emulate_ctxt, ctxt);
> -unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
> +/* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
> +uint8_t insn_off = offset - sh_ctxt->insn_buf_eip;
>  
>  ASSERT(seg == x86_seg_cs);
>  
> -/* Fall back if requested bytes are not in the prefetch cache. */
> -if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
> +/*
> + * Fall back if requested bytes are not in the prefetch cache, but always
> + * perform the zero-length read for segmentation purposes.
> + */
> +if ( !bytes || unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
>  return hvm_read(seg, offset, p_data, bytes,
>  hvm_access_insn_fetch, sh_ctxt);

I only note this now - patch one has an (apparently unnecessary)
insn_off > sizeof() check ahead of the insn_off + bytes > sizeof()
one. If you see a need for it to be there, why don't you put it here
too? In any event I'd like to ask for both pieces of code to match
up in the end (i.e. I'm fine with the adjustment being done on either
side), and then
Reviewed-by: Jan Beulich 
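
For concreteness, if the extra check were kept and carried over to this side,
the matched-up condition would be something like the sketch below (using this
hunk's identifiers and assuming the cache is backed by the fixed-size
sh_ctxt->insn_buf array; patch 1's actual wording may differ):

    if ( !bytes ||
         unlikely(insn_off > sizeof(sh_ctxt->insn_buf)) ||
         unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
        return hvm_read(seg, offset, p_data, bytes,
                        hvm_access_insn_fetch, sh_ctxt);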

Jan


[Xen-devel] [PATCH 2/6] x86/shadow: Fixes to hvm_emulate_insn_fetch()

2017-06-21 Thread Andrew Cooper
Zero-length reads are jump-target segmentation checks; never serve them from
the cache.

Force insn_off to a single byte, as offset can wrap around or truncate with
respect to sh_ctxt->insn_buf_eip under a number of normal circumstances.
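
One way this shows up (an illustrative, standalone example rather than part of
the fix: a 16-bit guest whose IP wraps at 64KiB while executing out of the
prefetch buffer):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned long insn_buf_eip   = 0xfffeUL; /* cache starts just below the 64KiB IP wrap */
    unsigned long offset         = 0x0001UL; /* fetch offset after IP has wrapped          */
    unsigned int  insn_buf_bytes = 15;       /* bytes held in the prefetch cache           */

    unsigned int wide   = offset - insn_buf_eip; /* 0xffff0003: appears far outside the cache */
    uint8_t      narrow = offset - insn_buf_eip; /* 0x03: the intended index into the cache   */

    printf("wide   = %#x -> %s\n", wide,
           wide < insn_buf_bytes ? "cache hit" : "cache miss");
    printf("narrow = %#x -> %s\n", (unsigned)narrow,
           narrow < insn_buf_bytes ? "cache hit" : "cache miss");
    return 0;
}

With the unsigned int index the cached bytes are never found in this case,
while the byte-wide index lands on the intended slot.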

Signed-off-by: Andrew Cooper 
---
CC: Tim Deegan 
CC: Jan Beulich 
---
 xen/arch/x86/mm/shadow/common.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 2e64a77..deea03a 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -235,12 +235,16 @@ hvm_emulate_insn_fetch(enum x86_segment seg,
 {
 struct sh_emulate_ctxt *sh_ctxt =
 container_of(ctxt, struct sh_emulate_ctxt, ctxt);
-unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
+/* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
+uint8_t insn_off = offset - sh_ctxt->insn_buf_eip;
 
 ASSERT(seg == x86_seg_cs);
 
-/* Fall back if requested bytes are not in the prefetch cache. */
-if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
+/*
+ * Fall back if requested bytes are not in the prefetch cache, but always
+ * perform the zero-length read for segmentation purposes.
+ */
+if ( !bytes || unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
 return hvm_read(seg, offset, p_data, bytes,
 hvm_access_insn_fetch, sh_ctxt);
 
-- 
2.1.4

