[EMAIL PROTECTED] wrote:
> From: Ben-Ami Yassour <[EMAIL PROTECTED]>
>
> Signed-off-by: Ben-Ami Yassour <[EMAIL PROTECTED]>
> Signed-off-by: Muli Ben-Yehuda <[EMAIL PROTECTED]>
> ---
>  arch/x86/kvm/mmu.c         |   59 +++++++++++++++++++++++++++++--------------
>  arch/x86/kvm/paging_tmpl.h |   19 +++++++++----
>  include/linux/kvm_host.h   |    2 +-
>  virt/kvm/kvm_main.c        |   17 +++++++++++-
>  4 files changed, 69 insertions(+), 28 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 078a7f1..c89029d 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -112,6 +112,8 @@ static int dbg = 1;
>  #define PT_FIRST_AVAIL_BITS_SHIFT 9
>  #define PT64_SECOND_AVAIL_BITS_SHIFT 52
>  
> +#define PT_SHADOW_IO_MARK (1ULL << PT_FIRST_AVAIL_BITS_SHIFT)
> +
>   

Please rename this to PT_SHADOW_MMIO_MASK.


>  #define VALID_PAGE(x) ((x) != INVALID_PAGE)
>  
>  #define PT64_LEVEL_BITS 9
> @@ -237,6 +239,9 @@ static int is_dirty_pte(unsigned long pte)
>  
>  static int is_rmap_pte(u64 pte)
>  {
> +     if (pte & PT_SHADOW_IO_MARK)
> +             return false;
> +
>       return is_shadow_present_pte(pte);
>  }
>   

Why avoid rmap on mmio pages?  Sure, it's unnecessary work, but having 
fewer cases improves overall reliability.

You can use pfn_valid() in gfn_to_pfn() and kvm_release_pfn_*() to 
conditionally update the page refcounts.
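Something like this (untested sketch, not the actual patch; 
kvm_pfn_is_mmio() is just a name I made up, and the hva-to-pfn 
resolution is elided):

static inline int kvm_pfn_is_mmio(pfn_t pfn)
{
	/* MMIO pfns have no backing struct page */
	return !pfn_valid(pfn);
}

pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
{
	pfn_t pfn;

	/* ... translate gfn to hva and resolve the pfn ... */

	if (!kvm_pfn_is_mmio(pfn))
		get_page(pfn_to_page(pfn));	/* refcount RAM pages only */

	return pfn;
}

void kvm_release_pfn_clean(pfn_t pfn)
{
	if (!kvm_pfn_is_mmio(pfn))
		put_page(pfn_to_page(pfn));	/* matching drop, RAM only */
}

That way the rmap and release paths stay common for RAM and MMIO, and 
only the refcounting is special-cased.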

-- 
Any sufficiently difficult bug is indistinguishable from a feature.

