Re: [Xen-devel] [PATCH v3] x86/shadow: Correct guest behaviour when creating PTEs above maxphysaddr

2017-02-20 Thread Tim Deegan
At 15:45 + on 16 Feb (1487259954), Andrew Cooper wrote:
> XSA-173 (c/s 8b1764833) introduces gfn_bits, and an upper limit which might be
> lower than the real maxphysaddr, to avoid overflowing the superpage shadow
> backpointer.
> 
> However, plenty of hardware has a physical address width less than 44 bits,
> and the code added in shadow_domain_init() is a straight assignment.  This
> causes gfn_bits to be increased beyond the physical address width on most
> Intel consumer hardware (typically a width of 39, which is the number reported
> to the guest via CPUID).
> 
> If the guest intentionally creates a PTE referencing a physical address
> between 39 and 44 bits, the result should be #PF[RSVD] when the virtual
> address.  However, the shadow code accepts the PTE, shadows it, and the
> virtual address works normally.
> 
> Introduce paging_max_paddr_bits() to calculate the largest guest physical
> address supportable by the paging infrastructure, and update
> recalculate_cpuid_policy() to take this into account when clamping the guest's
> cpuid_policy to reality.
> 
> There is an existing gfn_valid() in guest_pt.h but it is unused in the
> codebase.  Repurpose it to perform a guest-specific maxphysaddr check, which
> replaces the users of gfn_bits.
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Tim Deegan 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v3] x86/shadow: Correct guest behaviour when creating PTEs above maxphysaddr

2017-02-16 Thread Tian, Kevin
> From: Andrew Cooper [mailto:andrew.coop...@citrix.com]
> Sent: Thursday, February 16, 2017 11:46 PM
> 
> [...]
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Kevin Tian 



Re: [Xen-devel] [PATCH v3] x86/shadow: Correct guest behaviour when creating PTEs above maxphysaddr

2017-02-16 Thread George Dunlap
On 16/02/17 15:45, Andrew Cooper wrote:
> [...]
> 
> Signed-off-by: Andrew Cooper 

Acked-by: George Dunlap 

> ---
> CC: Jan Beulich 
> CC: Tim Deegan 
> CC: George Dunlap 
> CC: Jun Nakajima 
> CC: Kevin Tian 
> 
> v3:
>  * Retain pse36 maxphysaddr logic.
>  * Repurpose gfn_valid().
> 
> v2:
>  * Introduce paging_max_paddr_bits() rather than moving paging logic into
>recalculate_cpuid_policy().
>  * Rewrite half of the commit message.
> ---
>  xen/arch/x86/cpuid.c|  3 ++-
>  xen/arch/x86/hvm/vmx/vvmx.c |  3 +--
>  xen/arch/x86/mm/guest_walk.c|  3 +--
>  xen/arch/x86/mm/hap/hap.c   |  2 --
>  xen/arch/x86/mm/p2m.c   |  2 +-
>  xen/arch/x86/mm/shadow/common.c | 10 --
>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>  xen/include/asm-x86/domain.h|  3 ---
>  xen/include/asm-x86/guest_pt.h  |  6 --
>  xen/include/asm-x86/paging.h| 21 +
>  10 files changed, 27 insertions(+), 28 deletions(-)
> 
> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
> index e0a387e..07d24da 100644
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -6,6 +6,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  
> @@ -504,7 +505,7 @@ void recalculate_cpuid_policy(struct domain *d)
>  
>  p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
>  p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
> -d->arch.paging.gfn_bits + PAGE_SHIFT);
> +paging_max_paddr_bits(d));
>  p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
>  (p->basic.pae || p->basic.pse36) ? 36 : 32);
>  
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index f6a25a6..74775dd 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1420,8 +1420,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
>  return X86EMUL_OKAY;
>  }
>  
> -if ( (gpa & ~PAGE_MASK) ||
> - (gpa >> (v->domain->arch.paging.gfn_bits + PAGE_SHIFT)) )
> +if ( (gpa & ~PAGE_MASK) || !gfn_valid(v->domain, _gfn(gpa >> PAGE_SHIFT)) )
>  {
>  vmfail_invalid(regs);
>  return X86EMUL_OKAY;
> diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
> index a67fd5a..faaf70c 100644
> --- a/xen/arch/x86/mm/guest_walk.c
> +++ b/xen/arch/x86/mm/guest_walk.c
> @@ -434,8 +434,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
>  
>  /* If this guest has a restricted physical address space then the
>   * target GFN must fit within it. */
> -if ( !(rc & _PAGE_PRESENT)
> - && gfn_x(guest_l1e_get_gfn(gw->l1e)) >> d->arch.paging.gfn_bits )
> +if ( !(rc & _PAGE_PRESENT) && !gfn_valid(d, guest_l1e_get_gfn(gw->l1e)) )
>  rc |= _PAGE_INVALID_BITS;
>  
>  return rc;
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index b5870bf..d7cd8da 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -446,8 +446,6 @@ void hap_domain_init(struct domain *d)
>  {
>  INIT_PAGE_LIST_HEAD(&d->arch.paging.hap.freelist);
>  
> -d->arch.paging.gfn_bits = hap_paddr_bits - PAGE_SHIFT;
> -
>  /* Use HAP logdirty mechanism. */
>  paging_log_dirty_init(d, hap_enable_log_dirty,
>hap_disable_log_dirty,
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 0c1820e..cf3d6b0 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1784,7 +1784,7 @@ void 

Re: [Xen-devel] [PATCH v3] x86/shadow: Correct guest behaviour when creating PTEs above maxphysaddr

2017-02-16 Thread Jan Beulich
>>> On 16.02.17 at 16:45,  wrote:
> [...]
> 
> Signed-off-by: Andrew Cooper 

Reviewed-by: Jan Beulich 


