On 07/09/2015 11:18 PM, Paolo Bonzini wrote:
On 09/07/2015 04:30, Xiao Guangrong wrote:
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 602b974a60a6..0f125c1860ec 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1085,6 +1085,47 @@ static u64 svm_compute_tsc_offset
On 07/08/2015 07:19 PM, Paolo Bonzini wrote:
On 08/07/2015 07:59, Xiao Guangrong wrote:
On 07/07/2015 09:45 PM, Paolo Bonzini wrote:
Right now, NPT page attributes are not used, and the final page
attribute depends solely on gPAT (which however is not synced
correctly), the guest MTRRs
On 07/07/2015 09:45 PM, Paolo Bonzini wrote:
Right now, NPT page attributes are not used, and the final page
attribute depends solely on gPAT (which however is not synced
correctly), the guest MTRRs and the guest page attributes.
However, we can do better by mimicking what is done for VMX.
In

On 06/23/2015 04:00 PM, Paolo Bonzini wrote:
On 23/06/2015 04:29, Xiao Guangrong wrote:
If so, can you look at kvm/queue and see if it is okay for you (so that
we can get the series in 4.2)?
Ping?
If I don't get testing results before Wednesday, I'll drop this series
from the 4.2 pull
On 06/22/2015 07:24 PM, Paolo Bonzini wrote:
On 17/06/2015 18:11, Paolo Bonzini wrote:
Also, this loop looks weird. Is this what you wanted?
list_for_each_entry(tmp, &mtrr_state->head, node)
if (cur->base >= tmp->base)
break;
On 06/17/2015 04:18 PM, Paolo Bonzini wrote:
On 09/06/2015 06:01, Xiao Guangrong wrote:
On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
This is now very simple to do. The only interesting part is a simple
trick to find the right memslot in gfn_to_rmap, retrieving the address
space from
On 06/17/2015 04:15 PM, Paolo Bonzini wrote:
On 09/06/2015 05:28, Xiao Guangrong wrote:
-rmapp = gfn_to_rmap(kvm, sp->gfn, PT_PAGE_TABLE_LEVEL);
+slots = kvm_memslots(kvm);
+slot = __gfn_to_memslot(slots, sp->gfn);
+rmapp = __gfn_to_rmap(sp->gfn, PT_PAGE_TABLE_LEVEL, slot);
Only KVM_NR_VAR_MTRR variable MTRRs are available in a KVM guest
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index cbf9f07..fe9cbe4 100644
vMTRR does not depend on any host MTRR feature and fixed MTRRs have always
been implemented, so drop this field
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 9 -
arch/x86/kvm/mtrr.c | 7 +++
arch/x86/kvm/x86.c | 1 -
3 files changed
as it expected
This patchset fixes these bugs and also does optimizations and cleanups.
Xiao Guangrong (15):
KVM: x86: fix CR0.CD virtualization
KVM: x86: move MTRR related code to a separate file
KVM: MTRR: handle MSR_MTRRcap in kvm_mtrr_get_msr
KVM: MTRR: remove mtrr_state.have_fixed
KVM: MTRR
MTRR code currently lives in x86.c and mmu.c; move it to a separate file to
make the organization clearer. That file will be the place where we fully
implement vMTRR
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 1 -
arch/x86/kvm/Makefile | 2 +-
arch/x86
MSR_MTRRcap is an MTRR MSR, so move its handler to the common place; also
add some comments to make the hard-coded values more readable
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 12
arch/x86/kvm/x86.c | 2 --
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git
Two functions are introduced:
- fixed_mtrr_addr_to_seg() translates the address to the fixed
MTRR segment
- fixed_mtrr_addr_seg_to_range_index() translates the address to
the index of kvm_mtrr.fixed_ranges[]
They will be used in the later patch
Signed-off-by: Xiao Guangrong
---
arch/x86
It gets the range for the specified variable MTRR
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 19 +--
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index df73149..cb9702d 100644
--- a/arch/x86/kvm/mtrr.c
Sort all valid variable MTRRs based on their base addresses; this will help us
check whether a range is fully contained in variable MTRRs
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 3 ++
arch/x86/kvm/mtrr.c | 63
This table summarizes the information of the fixed MTRRs and introduces some
APIs to abstract their operation, which helps us clean up the code; it will be
used in later patches
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 200 ++--
1 file
Currently, CR0.CD is not checked when we virtualize the memory cache type for
noncoherent_dma guests; this patch fixes it by:
- setting UC for all memory if CR0.CD = 1
- zapping all the last sptes in MMU if CR0.CD is changed
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/vmx.c | 32
Based on Intel's SDM, mapping a huge page whose 4k sub-pages do not have a
consistent memory cache type causes undefined behavior.
To avoid this kind of undefined behavior, we force the use of 4k pages
in this case.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 20
mtrr_for_each_mem_type() is ready now, use it to simplify
kvm_mtrr_get_guest_memory_type()
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 64 ++---
1 file changed, 16 insertions(+), 48 deletions(-)
diff --git a/arch/x86/kvm/mtrr.c b/arch
It walks all MTRRs and gets the memory cache type settings for the specified
range; it also checks whether the range is fully covered by MTRRs
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 188
1 file changed, 188 insertions(+)
diff
- kvm_mtrr_get_guest_memory_type() only checks one page in the MTRRs, so
it is unnecessary to check whether the range is partially
covered by an MTRR
- optimize the check of overlapping memory types and add some comments
to explain the precedence
Signed-off-by: Xiao Guangrong
---
arch/x86
Variable MTRR MSRs are 64 bits and are directly accessed at full length, so
there is no reason to split them into two 32-bit halves
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 7 ++-
arch/x86/kvm/mtrr.c | 32 ++--
2 files changed, 16
Drop kvm_mtrr->enable, avoid the encode/decode work, and get rid of
all the hard-coded values
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 3 +--
arch/x86/kvm/mtrr.c | 40
2 files changed, 29 insertions(+), 14 deleti
Looks good to me:
Reviewed-by: Xiao Guangrong
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
introduced in order to
operate on all address spaces when adding or deleting private
memory slots.
Reviewed-by: Xiao Guangrong
* to the newly-introduced
kvm_vcpu_*, which call into kvm_arch_vcpu_memslots_id.
Reviewed-by: Xiao Guangrong
On 05/28/2015 01:05 AM, Paolo Bonzini wrote:
This is always available (with one exception in the auditing code).
Later we will also use the role to look up the right memslots array.
return;
@@ -191,11 +191,15 @@ static void audit_write_protection(struct kvm *kvm,
t these decoding into a separate function?
Nice idea indeed.
Reviewed-by: Xiao Guangrong
On 06/09/2015 08:36 AM, David Matlack wrote:
On Sat, May 30, 2015 at 3:59 AM, Xiao Guangrong
wrote:
It walks all MTRRs and gets the memory cache type settings for the specified
range; it also checks whether the range is fully covered by MTRRs
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm
Thanks for your review, David!
On 06/09/2015 08:36 AM, David Matlack wrote:
static void update_mtrr(struct kvm_vcpu *vcpu, u32 msr)
{
struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state;
- gfn_t start, end, mask;
+ gfn_t start, end;
int index;
if (msr
[ CCed Zhang Yang ]
On 06/04/2015 04:36 PM, Paolo Bonzini wrote:
On 04/06/2015 10:23, Xiao Guangrong wrote:
So, why do you need to always use IPAT=0? Can patch 15 keep the current
logic for RAM, like this:
if (is_mmio || kvm_arch_has_noncoherent_dma(vcpu->kvm))
On 06/04/2015 04:23 PM, Xiao Guangrong wrote:
On 06/03/2015 03:55 PM, Paolo Bonzini wrote:
On 03/06/2015 04:56, Xiao Guangrong wrote:
On 06/01/2015 05:36 PM, Paolo Bonzini wrote:
On 30/05/2015 12:59, Xiao Guangrong wrote:
Currently guest MTRR is completely prohibited if cache snoop
On 06/03/2015 03:55 PM, Paolo Bonzini wrote:
On 03/06/2015 04:56, Xiao Guangrong wrote:
On 06/01/2015 05:36 PM, Paolo Bonzini wrote:
On 30/05/2015 12:59, Xiao Guangrong wrote:
Currently guest MTRR is completely prohibited if cache snoop is
supported on
IOMMU (!noncoherent_dma
On 06/01/2015 10:26 PM, Paolo Bonzini wrote:
On 01/06/2015 11:33, Paolo Bonzini wrote:
+ looker->mem_type = looker->mtrr_state->fixed_ranges[index];
+ looker->start = fixed_mtrr_range_end_addr(seg, index);
+ return true;
in mtrr_lookup_fixed_start is the same as this:
On 06/01/2015 05:36 PM, Paolo Bonzini wrote:
On 30/05/2015 12:59, Xiao Guangrong wrote:
Currently guest MTRR is completely prohibited if cache snoop is supported on
IOMMU (!noncoherent_dma) and host does the emulation based on the knowledge
from host side, however, host side is not the good
On 06/01/2015 05:33 PM, Paolo Bonzini wrote:
On 30/05/2015 12:59, Xiao Guangrong wrote:
+struct mtrr_looker {
+ /* input fields. */
+ struct kvm_mtrr *mtrr_state;
+ u64 start;
+ u64 end;
s/looker/iter/ or s/looker/lookup/
Good to me.
+static void
On 06/01/2015 05:27 PM, Paolo Bonzini wrote:
On 30/05/2015 12:59, Xiao Guangrong wrote:
Sort all valid variable MTRRs based on their base addresses; this will help us
check whether a range is fully contained in variable MTRRs
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm
On 06/01/2015 05:25 PM, Paolo Bonzini wrote:
On 30/05/2015 12:59, Xiao Guangrong wrote:
This table summarizes the information of the fixed MTRRs and introduces some
APIs to abstract their operation, which helps us clean up the code; it will be
used in later patches
Signed-off-by: Xiao Guangrong
On 06/01/2015 05:16 PM, Paolo Bonzini wrote:
On 30/05/2015 12:59, Xiao Guangrong wrote:
- kvm_mtrr_get_guest_memory_type() only checks one page in the MTRRs, so
it is unnecessary to check whether the range is partially covered by an
MTRR
- optimize the check of overlap memory
Thanks for your review, Paolo!
On 06/01/2015 05:11 PM, Paolo Bonzini wrote:
struct kvm_vcpu_arch {
diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index 562341b..6de49dd 100644
--- a/arch/x86/kvm/mtrr.c
+++ b/arch/x86/kvm/mtrr.c
@@ -105,7 +105,6 @@ EXPORT_SYMBOL_GPL(kvm_mtrr_valid);
On 05/30/2015 06:59 PM, Xiao Guangrong wrote:
Currently guest MTRR is completely prohibited if cache snoop is supported on
IOMMU (!noncoherent_dma) and host does the emulation based on the knowledge
from host side, however, host side is not the good point to know
what the purpose of guest
It walks all MTRRs and gets the memory cache type settings for the specified
range; it also checks whether the range is fully covered by MTRRs
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 183
1 file changed, 183 insertions(+)
diff
MTRR code currently lives in x86.c and mmu.c; move it to a separate file to
make the organization clearer. That file will be the place where we fully
implement vMTRR
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/Makefile | 2 +-
arch/x86
Sort all valid variable MTRRs based on their base addresses; this will help us
check whether a range is fully contained in variable MTRRs
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 3 +++
arch/x86/kvm/mtrr.c | 39
mtrr_for_each_mem_type() is ready now, use it to simplify
kvm_mtrr_get_guest_memory_type()
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 61 +
1 file changed, 15 insertions(+), 46 deletions(-)
diff --git a/arch/x86/kvm/mtrr.c b/arch
Only KVM_NR_VAR_MTRR variable MTRRs are available in a KVM guest
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 65de1b3..2c3c52d 100644
This table summarizes the information of the fixed MTRRs and introduces some
APIs to abstract their operation, which helps us clean up the code; it will be
used in later patches
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 191 ++--
1 file
It gets the range for the specified variable MTRR
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mtrr.c | 19 +--
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index 888441e..aeb9767 100644
--- a/arch/x86/kvm/mtrr.c
buffer is not always UC as host expected
This patchset enables full MTRR virtualization and currently only works on
Intel EPT architecture
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/mmu.c | 3 +--
arch/x86/kvm/mtrr.c | 3
Use a union definition to avoid the encode/decode workload and drop all the
hard-coded values
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 12 ++--
arch/x86/kvm/mtrr.c | 19 ---
2 files changed, 18 insertions(+), 13 deletions(-)
diff --git a/arch
regression is detected
Xiao Guangrong (15):
KVM: x86: move MTRR related code to a separate file
KVM: MTRR: handle MSR_MTRRcap in kvm_mtrr_get_msr
KVM: MTRR: remove mtrr_state.have_fixed
KVM: MTRR: exactly define the size of variable MTRRs
KVM: MTRR: clean up mtrr default type
KVM: MTRR