On 12/24/2015 05:11 PM, Xiao Guangrong wrote:
On 12/24/2015 04:36 PM, Kai Huang wrote:
On 12/23/2015 07:25 PM, Xiao Guangrong wrote:
Now all non-leaf shadow pages are page-tracked, so if a gfn is not tracked,
no non-leaf shadow page of that gfn exists and we can directly
make the shadow page of the gfn unsync.
On 12/15/2015 05:10 PM, Xiao Guangrong wrote:
On 12/15/2015 03:52 PM, Kai Huang wrote:
static bool __mmu_gfn_lpage_is_disallowed(gfn_t gfn, int level,
@@ -2140,12 +2150,18 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
hlist_add_head(&sp->hash_link
On 12/15/2015 04:46 PM, Xiao Guangrong wrote:
On 12/15/2015 03:06 PM, Kai Huang wrote:
Hi Guangrong,
I am starting to review this series and will have some comments and
questions; you can judge whether they are valuable :)
Thank you very much for your review and breaking the
On 12/01/2015 02:26 AM, Xiao Guangrong wrote:
Now all non-leaf shadow pages are page-tracked, so if a gfn is not tracked,
no non-leaf shadow page of that gfn exists and we can directly
make the shadow page of the gfn unsync.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 26 ---
On 12/01/2015 02:26 AM, Xiao Guangrong wrote:
A page fault caused by a write access to a write-tracked page cannot
be fixed; it always needs to be emulated. page_fault_handle_page_track()
is the fast path we introduce here to skip holding mmu-lock and shadow
page table walking
Why can it be o
On 12/15/2015 03:52 PM, Kai Huang wrote:
On 12/01/2015 02:26 AM, Xiao Guangrong wrote:
Non-leaf shadow pages are always write-protected, so they can be a user
of page track.
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_page_track.h | 8 +
arch/x86/kvm/mmu.c
On 12/01/2015 02:26 AM, Xiao Guangrong wrote:
These two functions are the user APIs:
- kvm_page_track_add_page(): add the page to the tracking pool; after
that, later specified accesses to the page will be tracked
- kvm_page_track_remove_page(): remove the page from the tracking pool,
the s
Hi Guangrong,
I am starting to review this series and will have some comments and
questions; you can judge whether they are valuable :)
See below.
On 12/01/2015 02:26 AM, Xiao Guangrong wrote:
The array, gfn_track[mode][gfn], is introduced in the memory slot for every
guest page; this is
->pml_pg);
vmx->pml_pg = NULL;
-
- vmcs_clear_bits(SECONDARY_VM_EXEC_CONTROL, SECONDARY_EXEC_ENABLE_PML);
}
Thanks,
-Kai
On 11/04/2015 08:00 PM, Paolo Bonzini wrote:
On 04/11/2015 06:46, Kai Huang wrote:
I found PML has been broken since the commit below:
(!enable_pml). This is more
reasonable as PML is currently either always enabled or disabled. With this,
explicitly updating SECONDARY_EXEC_ENABLE_PML in vmx_enable{disable}_pml is not
needed, so also rename vmx_enable{disable}_pml to vmx_create{destroy}_pml_buffer.
Signed-off-by: Kai Huang
---
Sorry
On 11/03/2015 05:59 PM, Paolo Bonzini wrote:
On 03/11/2015 06:49, Kai Huang wrote:
I found PML has been broken since the commit below:
commit feda805fe7c4ed9cf78158e73b1218752e3b4314
Author: Xiao Guangrong
Date: Wed Sep 9 14:05:55 2015 +0800
KVM: VMX
enabled/disabled on demand by
updating SECONDARY_VM_EXEC_CONTROL, if vmx_cpuid_update is called between the
feature being enabled and disabled.
Fix this by calling vmcs_read32() to read out SECONDARY_VM_EXEC_CONTROL directly.
Signed-off-by: Kai Huang
---
arch/x86/kvm/vmx.c | 2 +-
1 file changed, 1
On 02/05/2015 11:04 PM, Radim Krčmář wrote:
2015-02-05 14:23+0800, Kai Huang:
On 02/03/2015 11:18 PM, Radim Krčmář wrote:
You have it protected by CONFIG_X86_64, but use it unconditionally.
Thanks for catching. This has been fixed by another patch, and the fix has
also been merged by Paolo
On 02/03/2015 11:53 PM, Radim Krčmář wrote:
2015-01-28 10:54+0800, Kai Huang:
This patch adds new kvm_x86_ops dirty logging hooks to enable/disable dirty
logging for particular memory slot, and to flush potentially logged dirty GPAs
before reporting slot->dirty_bitmap to userspace.
kvm
On 02/03/2015 11:18 PM, Radim Krčmář wrote:
2015-01-28 10:54+0800, Kai Huang:
This patch adds PML support in VMX. A new module parameter 'enable_pml' is added
(+module_param_named(pml, enable_pml, bool, S_IRUGO);)
to allow the user to enable/disable it manually.
Signed-off-by:
On 02/04/2015 01:34 AM, Radim Krčmář wrote:
2015-01-28 10:54+0800, Kai Huang:
This patch adds new mmu-layer functions to clear/set the D-bit for a memory slot, and
to write-protect superpages for a memory slot.
In case of PML, the CPU logs the dirty GPA automatically into the PML buffer when it
updates a D-bit
ion out of
CONFIG_X86_64.
Tested with Fengguang's .config, and also did sanity test on x86_64.
Signed-off-by: Kai Huang
---
arch/x86/kvm/trace.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index a139977..7c7bc8b 100644
---
We don't have to write-protect guest memory for dirty logging if the architecture
supports hardware dirty logging, such as PML on VMX, so rename it to be more
generic.
Signed-off-by: Kai Huang
---
arch/arm/kvm/mmu.c | 18 --
arch/x86/kvm/mmu.c
to VMX specific. Other ARCHs won't be impacted as these hooks are NULL
for them.
Signed-off-by: Kai Huang
---
arch/x86/include/asm/kvm_host.h | 25 +++
arch/x86/kvm/mmu.c | 6 +++-
arch/x86/kvm/x86.c | 71 -
, we set D-bit manually for the slot with dirty logging disabled.
Signed-off-by: Kai Huang
---
arch/x86/include/asm/kvm_host.h | 9 ++
arch/x86/kvm/mmu.c | 195
2 files changed, 204 insertions(+)
diff --git a/arch/x86/include/asm/kvm_host.h
_region from
'struct kvm_userspace_memory_region *' to 'struct kvm_memory_slot * new', but it
requires changes on other non-x86 ARCHs too, so avoid it for now.
Signed-off-by: Kai Huang
---
arch/x86/include/asm/kvm_host.h | 3 ++-
arch/x86/kvm/mmu.c | 5 ++-
This patch adds PML support in VMX. A new module parameter 'enable_pml' is added
to allow the user to enable/disable it manually.
Signed-off-by: Kai Huang
---
arch/x86/include/asm/vmx.h | 4 +
arch/x86/include/uapi/asm/vmx.h | 1 +
arch/x86/kvm/trace.h| 18
ar
PML.
For the hva <-> pa change case, the spte is updated to either read-only (the host
pte is read-only) or dropped (the host pte is writeable); both cases are
handled by the above changes, so no further change is necessary.
Signed-off-by: Kai Huang
---
arch/x86/kvm/mmu.
s noticeable performance gain (around 4%~5%) of PML compared to Write
Protection.
Kai Huang (6):
KVM: Rename kvm_arch_mmu_write_protect_pt_masked to be more generic
for log dirty
KVM: MMU: Add mmu help functions to support PML
KVM: MMU: Explicitly set D-bit for writable spte.
KVM: x86
Apparently no TLB flush is needed when there's no valid rmap in the memory slot.
Signed-off-by: Kai Huang
---
arch/x86/kvm/mmu.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f83fc6c..d43bf50 100644
--- a/arch/x86/kvm/
D-bit status to do
specific things.
Sanity test was done on my machine with an Intel processor.
Signed-off-by: Kai Huang
---
arch/x86/kvm/mmu.c | 12
1 file changed, 12 insertions(+)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 978f402..1feac0c 100644
--- a/arch/x86/kvm/
Just saw the reply. Thanks a lot!
Thanks,
-Kai
On 03/21/2014 17:03, Kashyap Chamarthy wrote:
On Fri, Mar 21, 2014 at 12:22:54PM +0800, Kai Huang wrote:
Hi,
I see the virtio-balloon is designed for automatic memory ballooning between
the KVM host and guest, but from the latest Linux kernel mainline code
Thanks Paolo. What's the user-space tool / command to trigger the
virtio_balloon functionality? Basically I am looking for the whole code
path that triggers the virtio_balloon.
Thanks,
-Kai
On 03/21/2014 16:51, Paolo Bonzini wrote:
On 03/21/2014 05:22, Kai Huang wrote:
Hi,
I se
Hi,
I see the virtio-balloon is designed for automatic memory ballooning between
the KVM host and guest, but from the latest Linux kernel mainline code, it looks
like there's currently no consumer actually using it? Would you let me know
who the consumer is, if there is any?
Thanks,
-Kai
On Sun, Jan 19, 2014 at 10:11 PM, Alex Williamson
wrote:
> On Sun, 2014-01-19 at 22:03 +0800, Kai Huang wrote:
>> On Sat, Jan 18, 2014 at 3:25 AM, Alex Williamson
>> wrote:
>> > From: Alexey Kardashevskiy
>> >
>> > VFIO virtualizes MSIX table for the gu
On Sat, Jan 18, 2014 at 3:25 AM, Alex Williamson
wrote:
> From: Alexey Kardashevskiy
>
> VFIO virtualizes the MSIX table for the guest but does not map the part of
> a BAR which contains an MSIX table. Since vfio_mmap_bar() mmaps chunks
> before and after the MSIX table, they have to be aligned to the
>
> -int iommu_unmap(struct iommu_domain *domain, unsigned long iova, int
> gfp_order)
> +size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t
> size)
> {
> - size_t size, unmapped;
> + size_t unmapped_page, unmapped = 0;
> + unsigned int min_pagesz;
>
>
Clear, thanks!
-- Forwarded message --
From: Alex Williamson
Date: Wed, Nov 2, 2011 at 11:31 PM
Subject: Re: What's the usage model (purpose) of interrupt remapping in IOMMU?
To: Kai Huang
Cc: kvm@vger.kernel.org, linux-...@ger.kernel.org
On Wed, 2011-11-02 at 13:26 +0800
Hi,
In the case of direct I/O, without interrupt remapping in the IOMMU (Intel
VT-d or AMD IOMMU), the hypervisor needs to inject interrupts for the guest
when the guest is scheduled onto a specific CPU. At the beginning I
thought that with the IOMMU's interrupt remapping, the hardware could directly
forward the interrupt to
Hi all,
I am working on Intel iommu stuff and I have two questions -- I just
sent to the kvm list as I am not sure which mailing list I should send to,
and it would be much appreciated if you could help forward this to the
related mailing list. Thank you!
1) I see in the Intel iommu manual that caching behavior is report