On 6/22/2011 3:21 AM, Chris Wright wrote:
* Nai Xia (nai@gmail.com) wrote:
Introduced kvm_mmu_notifier_test_and_clear_dirty(),
kvm_mmu_notifier_dirty_update()
and their mmu_notifier interfaces to support KSM dirty bit tracking, which
brings significant performance gain in volatile pages scanning in KSM.
On Wednesday 22 June 2011 14:15:51 Izik Eidus wrote:
[...]
On 06/21/2011 04:32 PM, Nai Xia wrote:
Introduced kvm_mmu_notifier_test_and_clear_dirty(),
kvm_mmu_notifier_dirty_update()
and their mmu_notifier interfaces to support KSM dirty bit tracking, which
brings significant performance gain in volatile pages scanning in KSM.
Currently, kvm_mmu_notifier_dirty_update() returns 0 if and only [...]
On 6/22/2011 1:43 PM, Avi Kivity wrote:
On 06/21/2011 04:32 PM, Nai Xia wrote:
[...]
On 06/22/2011 02:05 PM, Izik Eidus wrote:
+	spte = rmap_next(kvm, rmapp, NULL);
+	while (spte) {
+		int _dirty;
+		u64 _spte = *spte;
+		BUG_ON(!(_spte & PT_PRESENT_MASK));
+		_dirty = _spte & PT_DIRTY_MASK;
+		if (_dirty) {
+			dirty = 1;
[...]
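The loop quoted above walks the rmap chain for a gfn, records whether any spte had its dirty bit set, and clears the bit for the next scan. A minimal userspace sketch of that test-and-clear logic follows; the mask values and the plain spte array are illustrative stand-ins, not the kernel's real rmap structures or bit layout.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative bit positions; the real masks live in KVM's MMU code. */
#define PT_PRESENT_MASK (1ULL << 0)
#define PT_DIRTY_MASK   (1ULL << 6)

/* Walk a set of present sptes, report whether any was dirty, and clear
 * the dirty bits so the next scan sees fresh information. */
static int test_and_clear_dirty(uint64_t *sptes, size_t n)
{
	int dirty = 0;

	for (size_t i = 0; i < n; i++) {
		assert(sptes[i] & PT_PRESENT_MASK); /* BUG_ON in the kernel */
		if (sptes[i] & PT_DIRTY_MASK) {
			dirty = 1;
			sptes[i] &= ~PT_DIRTY_MASK;
		}
	}
	return dirty;
}
```

The clear step is what makes the scan stateful: a page reported dirty once reports clean on the next pass unless the guest writes it again.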
On 6/22/2011 2:10 PM, Avi Kivity wrote:
[...]
On 06/22/2011 02:19 PM, Izik Eidus wrote:
[...]
On 06/22/2011 02:24 PM, Avi Kivity wrote:
[...]
On 06/22/2011 02:28 PM, Avi Kivity wrote:
Actually, this is dangerous. If we use the dirty bit for other
things, we will get data corruption.
For example, we might want to map clean host pages as writeable-clean
in the spte on a read fault, so that we don't get a page fault when
they get [...]
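The corruption Avi warns about comes from two consumers sharing one hardware bit. A toy illustration (not kernel code, and the writeback consumer is hypothetical): if a KSM-style scan clears the dirty bit, any later pass that relies on the same bit misses the write.

```c
#include <assert.h>
#include <stdint.h>

#define PT_DIRTY_MASK (1ULL << 6)	/* illustrative bit position */

/* Consumer 1: KSM-style volatility scan; reads and clears the bit. */
static int ksm_scan(uint64_t *spte)
{
	int d = !!(*spte & PT_DIRTY_MASK);

	*spte &= ~PT_DIRTY_MASK;
	return d;
}

/* Consumer 2: a hypothetical second user (e.g. deciding whether a
 * writeable-clean page was written) that needs the same bit. */
static int needs_writeback(uint64_t spte)
{
	return !!(spte & PT_DIRTY_MASK);
}
```

Once the scan has consumed the bit, the second consumer sees a clean spte for a page that was in fact written, which is exactly the lost-information hazard.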
On Wednesday 22 June 2011 19:28:08 Avi Kivity wrote:
[...]
On 6/22/2011 2:33 PM, Nai Xia wrote:
[...]
On Tue, Jun 21, 2011 at 09:32:39PM +0800, Nai Xia wrote:
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index d48ec60..b407a69 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4674,6 +4674,7 @@ static int __init vmx_init(void)
 	kvm_mmu_set_mask_ptes(0ull, 0ull, [...]
If we don't flush the smp tlb, don't we risk that we'll insert pages in
the unstable tree that are volatile, just because the dirty bit didn't
get set again on the spte?

Yes, this is the trade-off we take; the unstable tree will be flushed
anyway, so this is nothing that won't be recovered.
On 06/22/2011 07:19 AM, Izik Eidus wrote:
So what we say here is: it is better to have a little junk in the unstable
tree that gets flushed eventually anyway, instead of making the guest
slower.
This race is something that does not reflect the accuracy of KSM anyway, due
to the full memcmp that we will [...]
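Izik's point is that the dirty-bit hint is only an optimization, never the final word: pages are merged only after a full byte-for-byte compare. A toy model of that safety property (function and page size are illustrative, not KSM's actual interfaces):

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096

/* A candidate page is merged only if a full memcmp against the tree
 * page matches. A wrong "looks stable" hint can at worst waste a
 * comparison; it can never merge pages with differing contents. */
static int try_merge(const char *tree_page, const char *candidate,
		     int looks_stable)
{
	if (!looks_stable)
		return 0;	/* hint says volatile: skip this round */
	return memcmp(tree_page, candidate, PAGE_SIZE) == 0;
}
```

So a stale spte dirty bit (e.g. due to an unflushed remote TLB) only costs a redundant memcmp or a missed merge opportunity, matching the "junk gets flushed eventually" argument above.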
* Izik Eidus (izik.ei...@ravellosystems.com) wrote:
On 6/22/2011 3:21 AM, Chris Wright wrote:
* Nai Xia (nai@gmail.com) wrote:
+	if (!shadow_dirty_mask) {
+		WARN(1, "KVM: do NOT try to test dirty bit in EPT\n");
+		goto out;
+	}
This should never fire with the [...]
On Wed, Jun 22, 2011 at 11:39:40AM -0400, Rik van Riel wrote:
[...]
On Wed, Jun 22, 2011 at 11:39 PM, Rik van Riel r...@redhat.com wrote:
[...]
On Wed, Jun 22, 2011 at 11:03 PM, Andrea Arcangeli aarca...@redhat.com wrote:
[...]
On Thu, Jun 23, 2011 at 07:13:54AM +0800, Nai Xia wrote:
I agree on this point. The dirty bit, like the young bit, is by no means
accurate. Even on 4kB pages, there is always a chance that the pte is dirty
but the contents are actually the same. Yeah, the whole optimization contains
trade-offs and [...]
Just a [...]
On 06/22/2011 07:13 PM, Nai Xia wrote:
[...]
On Thu, Jun 23, 2011 at 12:55 AM, Andrea Arcangeli aarca...@redhat.com wrote:
[...]
On Thu, Jun 23, 2011 at 07:19:06AM +0800, Nai Xia wrote:
OK, I'll have a try over other workarounds.
I am not feeling good about need_pte_unmap myself. :-)

The usual way is to check VM_HUGETLB in the caller and to call another
function that doesn't kmap. Casting pmd_t to pte_t isn't really nice. [...]
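The pattern Andrea suggests is to make the decision once at the call site and dispatch to a dedicated variant, rather than threading a need_pte_unmap flag through one combined function. A minimal sketch of that caller-side dispatch, with entirely hypothetical function names standing in for the kmap and non-kmap walkers:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the two walkers: one that kmaps a pte
 * page (small pages), one that works on the pmd directly (hugetlb). */
static int walk_small_pages(bool *used_kmap)
{
	*used_kmap = true;	/* would kmap_atomic() the pte page here */
	return 0;
}

static int walk_huge_page(bool *used_kmap)
{
	*used_kmap = false;	/* no kmap, no pmd_t-to-pte_t casting */
	return 0;
}

/* The caller checks the VM_HUGETLB-style condition once and picks the
 * right variant, instead of passing a need_pte_unmap flag downward. */
static int walk(bool is_hugetlb, bool *used_kmap)
{
	return is_hugetlb ? walk_huge_page(used_kmap)
			  : walk_small_pages(used_kmap);
}
```

Each variant then has one consistent map/unmap discipline, which is the cleanliness Andrea is after.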
On Thu, Jun 23, 2011 at 07:37:47AM +0800, Nai Xia wrote:
On 2MB pages, I'd like to remind you and Rik that ksmd currently splits
huge pages before their subpages get really merged to the stable tree.
So when there are many 2MB pages each having a 4kB subpage
changed for all time, this is already [...]
On 06/22/2011 07:37 PM, Nai Xia wrote:
On 2MB pages, I'd like to remind you and Rik that ksmd currently splits
huge pages before their subpages get really merged to the stable tree.

Your proposal appears to add a condition that causes ksmd to skip
doing that, which can cause the system to start [...]
On Thu, Jun 23, 2011 at 7:44 AM, Andrea Arcangeli aarca...@redhat.com wrote:
[...]
On Thu, Jun 23, 2011 at 7:59 AM, Andrea Arcangeli aarca...@redhat.com wrote:
[...]
On Thu, Jun 23, 2011 at 8:00 AM, Rik van Riel r...@redhat.com wrote:
[...]
On Thu, Jun 23, 2011 at 08:31:56AM +0800, Nai Xia wrote:
[...]
On Thu, Jun 23, 2011 at 7:28 AM, Rik van Riel r...@redhat.com wrote:
[...]
On Thu, Jun 23, 2011 at 7:25 AM, Andrea Arcangeli aarca...@redhat.com wrote:
[...]
On Thu, Jun 23, 2011 at 8:44 AM, Andrea Arcangeli aarca...@redhat.com wrote:
[...]
Introduced kvm_mmu_notifier_test_and_clear_dirty(),
kvm_mmu_notifier_dirty_update()
and their mmu_notifier interfaces to support KSM dirty bit tracking, which
brings significant performance gain in volatile pages scanning in KSM.
Currently, kvm_mmu_notifier_dirty_update() returns 0 if and only [...]
* Nai Xia (nai@gmail.com) wrote:
[...]
On Wednesday 22 June 2011 08:21:23 Chris Wright wrote:
[...]
35 matches