On Tue, 20 Apr 2021 at 18:23, Paolo Bonzini wrote:
>
> On 20/04/21 10:48, Wanpeng Li wrote:
> >> I was thinking of something simpler:
> >>
> >> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> >> index 9b8e30dd5b9b..455c648f9adc 100644
> >
On Tue, 20 Apr 2021 at 15:23, Paolo Bonzini wrote:
>
> On 20/04/21 08:08, Wanpeng Li wrote:
> > On Tue, 20 Apr 2021 at 14:02, Wanpeng Li wrote:
> >>
> >> On Tue, 20 Apr 2021 at 00:59, Paolo Bonzini wrote:
> >>>
> >>> On 19/04/21 18:32
On Tue, 20 Apr 2021 at 14:02, Wanpeng Li wrote:
>
> On Tue, 20 Apr 2021 at 00:59, Paolo Bonzini wrote:
> >
> > On 19/04/21 18:32, Sean Christopherson wrote:
> > > If false positives are a big concern, what about adding another pass to
> > > the loop
>
On Tue, 20 Apr 2021 at 00:59, Paolo Bonzini wrote:
>
> On 19/04/21 18:32, Sean Christopherson wrote:
> > If false positives are a big concern, what about adding another pass to the
> > loop
> > and only yielding to usermode vCPUs with interrupts in the second full pass?
> > I.e. give vCPUs that
On Fri, 2 Apr 2021 at 08:59, Sean Christopherson wrote:
>
> Avoid taking mmu_lock for unrelated .invalidate_range_{start,end}()
> notifications. Because mmu_notifier_count must be modified while holding
> mmu_lock for write, and must always be paired across start->end to stay
> balanced, lock
On Sat, 17 Apr 2021 at 21:09, Paolo Bonzini wrote:
>
> On 16/04/21 05:08, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > Both the lock holder vCPU and a halted IPI receiver are candidates for
> > boosting. However, the PLE handler was originally designed to
From: Wanpeng Li
Both the lock holder vCPU and a halted IPI receiver are candidates for
boosting. However, the PLE handler was originally designed to deal with the
lock holder preemption problem. Intel PLE occurs when the spinlock
waiter is in kernel mode. This assumption doesn't hold
On Thu, 15 Apr 2021 at 08:49, Sean Christopherson wrote:
>
> On Wed, Apr 14, 2021, Wanpeng Li wrote:
> > On Wed, 14 Apr 2021 at 01:25, Sean Christopherson wrote:
> > >
> > > On Tue, Apr 13, 2021, Wanpeng Li wrote:
> > > > The bugzilla https://
On Wed, 14 Apr 2021 at 01:25, Sean Christopherson wrote:
>
> On Tue, Apr 13, 2021, Wanpeng Li wrote:
> > The bugzilla https://bugzilla.kernel.org/show_bug.cgi?id=209831
> > reported that the guest time remains 0 when running a while true
> > loop in the guest.
> >
On Tue, 13 Apr 2021 at 15:48, Christian Borntraeger wrote:
>
>
>
> On 13.04.21 09:38, Wanpeng Li wrote:
> > On Tue, 13 Apr 2021 at 15:35, Christian Borntraeger wrote:
> >>
> >>
> >>
> >> On 13.04.21 09:16, Wanpeng Li wrote:
>
On Tue, 13 Apr 2021 at 15:35, Christian Borntraeger wrote:
>
>
>
> On 13.04.21 09:16, Wanpeng Li wrote:
> [...]
>
> > @@ -145,6 +155,13 @@ static __always_inline void guest_exit_irqoff(void)
> > }
> >
> > #else
> > +static __al
From: Wanpeng Li
The bugzilla https://bugzilla.kernel.org/show_bug.cgi?id=209831
reported that the guest time remains 0 when running a while true
loop in the guest.
The commit 87fa7f3e98a131 ("x86/kvm: Move context tracking where it
belongs") moves guest_exit_irqoff() close to vme
From: Wanpeng Li
Split context_tracking part from guest_enter/exit_irqoff, it will be
called separately in later patches.
Suggested-by: Thomas Gleixner
Cc: Thomas Gleixner
Cc: Sean Christopherson
Cc: Michael Tokarev
Signed-off-by: Wanpeng Li
---
include/linux/context_tracking.h | 42
ing functions for consistent
* place the virt time specific helpers at the proper spot
Suggested-by: Thomas Gleixner
Cc: Thomas Gleixner
Cc: Sean Christopherson
Cc: Michael Tokarev
Wanpeng Li (3):
context_tracking: Split guest_enter/exit_irqoff
context_tracking: Provide separate vtime account
From: Wanpeng Li
Provide separate vtime accounting functions, because having proper
wrappers for that case would be more consistent and less confusing.
Suggested-by: Thomas Gleixner
Cc: Thomas Gleixner
Cc: Sean Christopherson
Cc: Michael Tokarev
Signed-off-by: Wanpeng Li
---
include/linux
From: Wanpeng Li
If the target is the vCPU itself we do not need to yield; skipping it also
prevents a malicious guest from abusing the hypercall.
Signed-off-by: Wanpeng Li
---
v1 -> v2:
* update comments
arch/x86/kvm/x86.c | 4
1 file changed, 4 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
in
From: Wanpeng Li
To analyze some performance issues with lock contention and scheduling,
it is nice to know when directed yields succeed or fail.
Signed-off-by: Wanpeng Li
---
v1 -> v2:
* rename new vcpu stat
* account success instead of ignore
arch/x86/include/asm/kvm_hos
From: Wanpeng Li
Enabling PV TLB shootdown when !CONFIG_SMP doesn't make sense. Let's
move it inside CONFIG_SMP. In addition, we can avoid defining and
allocating __pv_cpu_mask when !CONFIG_SMP and get rid of the 'alloc'
variable in kvm_alloc_cpumask.
Signed-off-by: Wanpeng Li
---
v1 -> v2:
* shuf
On Fri, 9 Apr 2021 at 00:56, Sean Christopherson wrote:
>
> On Thu, Apr 08, 2021, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > If the target is the vCPU itself we do not need to yield; skipping it also
> > prevents a malicious guest from abusing the hypercall.
> >
> > Signed-off-by: Wan
On Fri, 9 Apr 2021 at 01:08, Sean Christopherson wrote:
>
> On Tue, Apr 06, 2021, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > To analyze some performance issues with lock contention and scheduling,
> > it is nice to know when directed yields are successful
On Fri, 9 Apr 2021 at 04:20, Sean Christopherson wrote:
>
> On Wed, Apr 07, 2021, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > Enable PV TLB shootdown when !CONFIG_SMP doesn't make sense. Let's move
> > it inside CONFIG_SMP. In addition, we can avoid alloc __pv
From: Wanpeng Li
If the target is the vCPU itself we do not need to yield; skipping it also
prevents a malicious guest from abusing the hypercall.
Signed-off-by: Wanpeng Li
---
Rebased on
https://lore.kernel.org/kvm/1617697935-4158-1-git-send-email-wanpen...@tencent.com/
arch/x86/kvm/x86.c | 4
1 file changed, 4 insertions
On Fri, 5 Mar 2021 at 09:12, Sean Christopherson wrote:
>
> Check the validity of the PDPTRs before allocating any of the PAE roots,
> otherwise a bad PDPTR will cause KVM to leak any previously allocated
> roots.
>
> Signed-off-by: Sean Christopherson
> ---
> arch/x86/kvm/mmu/mmu.c | 20
and page size. Whether or not the SLOB behavior is by
> design is unknown; it's just as likely that no SLOB users care about
> accounting and so no one has bothered to implement support in SLOB.
> Regardless, accounting vCPU allocations will not break SLOB+KVM+cgroup
> users, if any exist.
>
>
From: Wanpeng Li
Enabling PV TLB shootdown when !CONFIG_SMP doesn't make sense. Let's move
it inside CONFIG_SMP. In addition, we can avoid allocating __pv_cpu_mask when
!CONFIG_SMP and get rid of the 'alloc' variable in kvm_alloc_cpumask.
Signed-off-by: Wanpeng Li
---
arch/x86/kernel/kvm.c | 79
From: Wanpeng Li
To analyze some performance issues with lock contention and scheduling,
it is nice to know when directed yields succeed or fail.
Signed-off-by: Wanpeng Li
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/x86.c | 26 --
2
On Wed, 31 Mar 2021 at 11:24, Sean Christopherson wrote:
>
> On Wed, Mar 31, 2021, Wanpeng Li wrote:
> > On Wed, 31 Mar 2021 at 10:32, Sean Christopherson wrote:
> > >
> > > Use GFP_KERNEL_ACCOUNT for the vCPU allocations, the vCPUs are very much
> >
On Wed, 31 Mar 2021 at 10:32, Sean Christopherson wrote:
>
> Use GFP_KERNEL_ACCOUNT for the vCPU allocations, the vCPUs are very much
> tied to a single task/VM. For x86, the allocations were accounted up
> until the allocation code was moved to common KVM. For all other
> architectures, vCPU
> Cc: David Woodhouse
> Cc: Marcelo Tosatti
> Signed-off-by: Paolo Bonzini
Reviewed-by: Wanpeng Li
> ---
> arch/x86/kvm/x86.c | 25 ++---
> 1 file changed, 14 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> in
> Cc: David Woodhouse
> Cc: Marcelo Tosatti
> Signed-off-by: Paolo Bonzini
Reviewed-by: Wanpeng Li
> ---
> arch/x86/kvm/x86.c | 10 --
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index fe806e8942
On Tue, 30 Mar 2021 at 01:15, Sean Christopherson wrote:
>
> +Thomas
>
> On Mon, Mar 29, 2021, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > The bugzilla https://bugzilla.kernel.org/show_bug.cgi?id=209831
> > reported that the guest time remains 0 when runnin
From: Wanpeng Li
The bugzilla https://bugzilla.kernel.org/show_bug.cgi?id=209831
reported that the guest time remains 0 when running a while true
loop in the guest.
The commit 87fa7f3e98a131 ("x86/kvm: Move context tracking where it
belongs") moves guest_exit_irqoff() close to vme
Cc David Woodhouse,
On Wed, 24 Mar 2021 at 18:11, syzbot wrote:
>
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: 1c273e10 Merge tag 'zonefs-5.12-rc4' of git://git.kernel.o..
> git tree: upstream
> console output:
On Wed, 17 Mar 2021 at 16:04, Wanpeng Li wrote:
>
> On Wed, 17 Mar 2021 at 15:57, Michal Hocko wrote:
> >
> > On Wed 17-03-21 13:46:24, Wanpeng Li wrote:
> > > From: Wanpeng Li
> > >
> > > KVM allocations in the arm kvm code which are tied to the l
On Wed, 17 Mar 2021 at 15:57, Michal Hocko wrote:
>
> On Wed 17-03-21 13:46:24, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > KVM allocations in the arm kvm code which are tied to the life
> > of the VM process should be charged to the VM process's cgroup.
>
From: Wanpeng Li
KVM allocations in the arm kvm code which are tied to the life
of the VM process should be charged to the VM process's cgroup.
This will help the memcg controller make the right decisions.
Signed-off-by: Wanpeng Li
---
arch/arm64/kvm/arm.c | 5 +++--
arch
On Sat, 13 Mar 2021 at 17:33, Paolo Bonzini wrote:
>
> On 13/03/21 01:57, Wanpeng Li wrote:
> >> A third option would be to split the paths. In the end, it's only the
> >> ptr/val
> >> line that's shared.
> > I just sent out a formal patch for my alter
From: Wanpeng Li
After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
splats as below during boot:
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10
warn_bogus_irq_restore+0x26/0x30
Modules linked
On Thu, 11 Mar 2021 at 23:54, Sean Christopherson wrote:
>
> On Tue, Feb 23, 2021, Wanpeng Li wrote:
> > On Tue, 23 Feb 2021 at 13:25, Wanpeng Li wrote:
> > >
> > > From: Wanpeng Li
> > >
> > > After commit 997acaf6b4b59c (lockdep:
From: Wanpeng Li
After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
splats as below during boot:
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10
warn_bogus_irq_restore+0x26/0x30
Modules linked
From: Wanpeng Li
We should execute wbinvd on all dirty pCPUs on a guest wbinvd exit
to maintain data consistency in order to deal with noncoherent DMA.
smp_call_function_many() does not execute the provided function on the
local core, so this patch replaces it with on_each_cpu_mask().
Reported
On Fri, 5 Mar 2021 at 10:19, Sean Christopherson wrote:
>
> When posting a deadline timer interrupt, open code the checks guarding
> __kvm_wait_lapic_expire() in order to skip the lapic_timer_int_injected()
> check in kvm_wait_lapic_expire(). The injection check will always fail
> since the
ping,
On Tue, 23 Feb 2021 at 13:25, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the
> guest splats as below during boot:
>
> raw_local_irq_restore() called with IRQs enabled
> WARNING: CPU: 1
ping, :)
On Thu, 4 Mar 2021 at 08:35, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> Advancing the timer expiration should only be necessary on guest initiated
> writes. When we cancel the timer and clear .pending during state restore,
> clear expired_tscdeadline as well.
>
ping, :)
On Wed, 24 Feb 2021 at 09:38, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> # lscpu
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 88
> On-line CPU(s) list: 0-63
>
From: Wanpeng Li
Advancing the timer expiration should only be necessary on guest initiated
writes. When we cancel the timer and clear .pending during state restore,
clear expired_tscdeadline as well.
Reviewed-by: Sean Christopherson
Signed-off-by: Wanpeng Li
---
v1 -> v2:
* update pa
On Wed, 3 Mar 2021 at 01:16, Sean Christopherson wrote:
>
> On Tue, Mar 02, 2021, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > Advancing the timer expiration should only be necessary on guest initiated
> > writes. Now, we cancel the timer, clear .pending and
From: Wanpeng Li
Advancing the timer expiration should only be necessary on guest initiated
writes. Now, we cancel the timer, clear .pending and clear expired_tscdeadline
at the same time during state restore.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/lapic.c | 1 +
1 file changed, 1
From: Wanpeng Li
Reported by syzkaller:
KASAN: null-ptr-deref in range [0x0140-0x0147]
CPU: 1 PID: 8370 Comm: syz-executor859 Not tainted 5.11.0-syzkaller #0
RIP: 0010:synic_get arch/x86/kvm/hyperv.c:165 [inline]
RIP: 0010:kvm_hv_set_sint_gsi arch/x86/kvm
On Fri, 26 Feb 2021 at 15:01, syzbot wrote:
>
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit: a99163e9 Merge tag 'devicetree-for-5.12' of git://git.kern..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=12d72682d0
> kernel config:
From: Wanpeng Li
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 88
On-line CPU(s) list: 0-63
Off-line CPU(s) list: 64-87
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-5.10.0-rc3-tlinux2-0050+ root=/dev/mapper
On Tue, 23 Feb 2021 at 13:25, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the
> guest splats as below during boot:
>
> raw_local_irq_restore() called with IRQs enabled
> WARNING: CPU: 1 PID: 16
From: Wanpeng Li
After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
splats as below during boot:
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 1 PID: 169 at kernel/locking/irqflag-debug.c:10
warn_bogus_irq_restore+0x26/0x30
Modules linked
From: Wanpeng Li
The per-cpu vsyscall pvclock data pointer is assigned either an element of the
static array hv_clock_boot (#vCPU <= 64) or the dynamically allocated memory
hvclock_mem (#vCPU > 64). The dynamic memory will not be allocated if the
kvmclock vsyscall is disabled; this can result
On Wed, 27 Jan 2021 at 08:28, Wanpeng Li wrote:
>
> On Wed, 27 Jan 2021 at 01:26, Paolo Bonzini wrote:
> >
> > On 26/01/21 02:28, Wanpeng Li wrote:
> > > ping,
> > > On Mon, 18 Jan 2021 at 17:08, Wanpeng Li wrote:
> > >>
> > >> From:
On Wed, 27 Jan 2021 at 01:26, Paolo Bonzini wrote:
>
> On 26/01/21 02:28, Wanpeng Li wrote:
> > ping,
> > On Mon, 18 Jan 2021 at 17:08, Wanpeng Li wrote:
> >>
> >> From: Wanpeng Li
> >>
> >> The per-cpu vsyscall pvclock data poin
ping,
On Mon, 18 Jan 2021 at 17:08, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> The per-cpu vsyscall pvclock data pointer assigns either an element of the
> static array hv_clock_boot (#vCPU <= 64) or dynamically allocated memory
> hvclock_mem (vCPU > 64)
On Fri, 15 Jan 2021 at 11:20, Wanpeng Li wrote:
>
> On Wed, 6 Jan 2021 at 08:51, Sean Christopherson wrote:
> >
> > +tglx
> >
> > On Tue, Jan 05, 2021, Nitesh Narayan Lal wrote:
> > > This reverts commit d7a08882a0a4b4e176691331ee3f492996579534.
> >
On Tue, 19 Jan 2021 at 02:27, Paolo Bonzini wrote:
>
> On 15/01/21 02:15, Wanpeng Li wrote:
> >> The comment above should probably be updated as it is not clear why we
> >> check kvm_clock.vdso_clock_mode here. Actually, I would even suggest we
> >> introdu
From: Wanpeng Li
The per-cpu vsyscall pvclock data pointer is assigned either an element of the
static array hv_clock_boot (#vCPU <= 64) or the dynamically allocated memory
hvclock_mem (#vCPU > 64). The dynamic memory will not be allocated if the
kvmclock vsyscall is disabled; this can result
On Wed, 6 Jan 2021 at 08:51, Sean Christopherson wrote:
>
> +tglx
>
> On Tue, Jan 05, 2021, Nitesh Narayan Lal wrote:
> > This reverts commit d7a08882a0a4b4e176691331ee3f492996579534.
> >
> > After the introduction of the patch:
> >
> > 87fa7f3e9: x86/kvm: Move context tracking where it
On Thu, 14 Jan 2021 at 21:45, Vitaly Kuznetsov wrote:
>
> Wanpeng Li writes:
>
> > From: Wanpeng Li
> >
> > The per-cpu vsyscall pvclock data pointer assigns either an element of the
> > static array hv_clock_boot (#vCPU <= 64) or dynamically allocat
From: Wanpeng Li
The per-cpu vsyscall pvclock data pointer is assigned either an element of the
static array hv_clock_boot (#vCPU <= 64) or the dynamically allocated memory
hvclock_mem (#vCPU > 64). The dynamic memory will not be allocated if the
kvmclock vsyscall is disabled; this can result
On Thu, 7 Jan 2021 at 17:35, Vitaly Kuznetsov wrote:
>
> Sean Christopherson writes:
>
> > On Wed, Jan 06, 2021, Vitaly Kuznetsov wrote:
> >>
> >> Looking back, I don't quite understand why we wanted to account ticks
> >> between vmexit and exiting guest context as 'guest' in the first place;
>
On Wed, 6 Jan 2021 at 06:30, Nitesh Narayan Lal wrote:
>
> This reverts commit d7a08882a0a4b4e176691331ee3f492996579534.
>
> After the introduction of the patch:
>
> 87fa7f3e9: x86/kvm: Move context tracking where it belongs
>
> since we have moved guest_exit_irqoff closer to the VM-Exit,
On Thu, 22 Oct 2020 at 21:02, Paolo Bonzini wrote:
>
> On 22/10/20 03:34, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > Per KVM_GET_SUPPORTED_CPUID ioctl documentation:
> >
> > This ioctl returns x86 cpuid features which are supported by both the
From: Wanpeng Li
Per KVM_GET_SUPPORTED_CPUID ioctl documentation:
This ioctl returns x86 cpuid features which are supported by both the
hardware and kvm in its default configuration.
A well-behaved userspace should not set the bit if it is not supported.
Suggested-by: Jim Mattson
Signed-off
Any comments? Paolo! :)
On Wed, 9 Sep 2020 at 11:04, Wanpeng Li wrote:
>
> Any comments? guys!
> On Tue, 1 Sep 2020 at 19:52, wrote:
> >
> > From: Yulei Zhang
> >
> > Currently in KVM memory virtualization we rely on mmu_lock to
> > synchronize the memor
On Fri, 4 Sep 2020 at 19:29, Haiwei Li wrote:
>
> From: Haiwei Li
>
> Add trace_kvm_cr_write and trace_kvm_cr_read for svm.
>
> Signed-off-by: Haiwei Li
Reviewed-by: Wanpeng Li
On Wed, 16 Sep 2020 at 07:29, Sean Christopherson wrote:
>
> Replace the existing kvm_x86_ops.need_emulation_on_page_fault() with a
> more generic is_emulatable(), and unconditionally call the new function
> in x86_emulate_instruction().
>
> KVM will use the generic hook to support multiple
On Sat, 12 Sep 2020 at 14:20, Paolo Bonzini wrote:
>
> On 09/09/20 10:47, Wanpeng Li wrote:
> >> One more thing:
> >>
> >> VMX version does
> >>
> >> vmx_complete_interrupts(vmx);
> >> if (is_guest_mode(vcpu))
> >&g
From: Wanpeng Li
Analyze is_guest_mode() in svm_vcpu_run() instead of in
svm_exit_handlers_fastpath(), in conformity with the VMX version.
Suggested-by: Vitaly Kuznetsov
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/svm/svm.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git
From: Wanpeng Li
An apic map recalculation is missing after updating the DFR; this can happen
after an INIT reset in x2apic mode when the local apic was software enabled
before. This patch fixes it by introducing the function kvm_apic_set_dfr(),
which is called in the INIT reset handling path.
Signed-off-by: Wanpeng Li
From: Wanpeng Li
According to SDM 27.2.4, event delivery causes an APIC-access VM exit.
Don't report an internal error and freeze the guest when event delivery
causes an APIC-access exit; it is handleable and the event will be
re-injected during the next vmentry.
Signed-off-by: Wanpeng Li
---
arch
From: Wanpeng Li
Move svm_complete_interrupts() into svm_vcpu_run(), which aligns VMX
and SVM with respect to completing interrupts.
Suggested-by: Sean Christopherson
Reviewed-by: Vitaly Kuznetsov
Cc: Paul K.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/svm/svm.c | 3 +--
1 file changed
From: Wanpeng Li
Move the call to svm_exit_handlers_fastpath() after svm_complete_interrupts(),
since svm_complete_interrupts() consumes rip, and re-enable the
handle_fastpath_set_msr_irqoff() call in svm_exit_handlers_fastpath().
Suggested-by: Sean Christopherson
Reviewed-by: Vitaly
From: Wanpeng Li
The kick after setting KVM_REQ_PENDING_TIMER is used to handle the case where
the timer fires on a different pCPU than the one the vCPU is running on. This
kick is expensive because of the memory barrier, rcu, and preemption
disable/enable operations involved. We don't need this kick when injecting an
already-expired timer, we
From: Wanpeng Li
Analysis from Sean:
| svm->next_rip is reset in svm_vcpu_run() only after calling
| svm_exit_handlers_fastpath(), which will cause SVM's
| skip_emulated_instruction() to write a stale RIP.
Let's get rid of handle_fastpath_set_msr_irqoff() in
svm_exit_handlers_fastp
From: Wanpeng Li
All the checks in lapic_timer_int_injected() and __kvm_wait_lapic_expire(), as
well as these function calls themselves, waste cpu cycles when the timer mode
is not tscdeadline.
We can observe ~1.3% world switch time overhead by kvm-unit-tests/vmexit.flat
vmcall testing on AMD server. This patch
From: Wanpeng Li
Check apic_lvtt_tscdeadline() mode directly instead of apic_lvtt_oneshot()
and apic_lvtt_period() to guarantee the timer is in tsc-deadline mode on a
wrmsr to MSR_IA32_TSCDEADLINE.
Reviewed-by: Sean Christopherson
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/lapic.c | 3 +--
1
From: Wanpeng Li
Return 0 when getting the tscdeadline timer if the lapic is hw disabled.
Suggested-by: Paolo Bonzini
Reviewed-by: Sean Christopherson
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/lapic.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kvm
Collect sporadic patches for easy apply.
Wanpeng Li (9):
KVM: LAPIC: Return 0 when getting the tscdeadline timer if the lapic
is hw disabled
KVM: LAPIC: Guarantee the timer is in tsc-deadline mode when setting
KVM: LAPIC: Fix updating DFR missing apic map recalculation
KVM: VMX: Don't
On Wed, 9 Sep 2020 at 16:36, Vitaly Kuznetsov wrote:
>
> Wanpeng Li writes:
>
> > From: Wanpeng Li
> >
> > Moving svm_complete_interrupts() into svm_vcpu_run() which can align VMX
> > and SVM with respect to completing interrupts.
> >
> > Sugg
Any Reviewed-by for these two patches? :)
On Wed, 19 Aug 2020 at 16:55, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> There is missing apic map recalculation after updating DFR, if it is
> INIT RESET, in x2apic mode, local apic is software enabled before.
> This patch fix
On Wed, 9 Sep 2020 at 16:23, Vitaly Kuznetsov wrote:
>
> Wanpeng Li writes:
>
> > From: Wanpeng Li
> >
> > Analysis from Sean:
> >
> > | svm->next_rip is reset in svm_vcpu_run() only after calling
> > | svm_exit_handlers_fastpath(), which w
Any comments? guys!
On Tue, 1 Sep 2020 at 19:52, wrote:
>
> From: Yulei Zhang
>
> Currently in KVM memory virtualization we rely on mmu_lock to
> synchronize the memory mapping update, which makes vCPUs work
> in serialized mode and slows down the execution, especially after
> migration to do
From: Wanpeng Li
Move the call to svm_exit_handlers_fastpath() after svm_complete_interrupts(),
since svm_complete_interrupts() consumes rip, and re-enable the
handle_fastpath_set_msr_irqoff() call in svm_exit_handlers_fastpath().
Suggested-by: Sean Christopherson
Cc: Paul K.
Signed
From: Wanpeng Li
Move svm_complete_interrupts() into svm_vcpu_run(), which aligns VMX
and SVM with respect to completing interrupts.
Suggested-by: Sean Christopherson
Cc: Paul K.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/svm/svm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions
From: Wanpeng Li
Analysis from Sean:
| svm->next_rip is reset in svm_vcpu_run() only after calling
| svm_exit_handlers_fastpath(), which will cause SVM's
| skip_emulated_instruction() to write a stale RIP.
Let's get rid of handle_fastpath_set_msr_irqoff() in
svm_exit_handlers_fastp
From: Wanpeng Li
Move svm_complete_interrupts() into svm_vcpu_run(), which aligns VMX
and SVM with respect to completing interrupts.
Suggested-by: Sean Christopherson
Cc: Paul K.
Signed-off-by: Wanpeng Li
---
arch/x86/kvm/svm/svm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions
From: Wanpeng Li
Move the call to svm_exit_handlers_fastpath() after svm_complete_interrupts(),
since svm_complete_interrupts() consumes rip, and re-enable the
handle_fastpath_set_msr_irqoff() call in svm_exit_handlers_fastpath().
Suggested-by: Sean Christopherson
Cc: Paul K.
Signed
From: Wanpeng Li
Analysis from Sean:
| svm->next_rip is reset in svm_vcpu_run() only after calling
| svm_exit_handlers_fastpath(), which will cause SVM's
| skip_emulated_instruction() to write a stale RIP.
Let's get rid of handle_fastpath_set_msr_irqoff() in
svm_exit_handlers_fastp
From: Wanpeng Li
All the checks in lapic_timer_int_injected() and __kvm_wait_lapic_expire(), as
well as these function calls themselves, waste cpu cycles when the timer mode
is not tscdeadline.
We can observe ~1.3% world switch time overhead by kvm-unit-tests/vmexit.flat
vmcall testing on AMD server. This patch
On Thu, 3 Sep 2020 at 05:23, Sean Christopherson wrote:
>
> On Fri, Aug 28, 2020 at 09:35:08AM +0800, Wanpeng Li wrote:
> > From: Wanpeng Li
> >
> > per-vCPU timer_advance_ns should be set to 0 if timer mode is not
> > tscdeadline
> > otherwise
On Mon, 31 Aug 2020 at 20:48, Vitaly Kuznetsov wrote:
>
> Wanpeng Li writes:
>
> > From: Wanpeng Li
> >
> > per-vCPU timer_advance_ns should be set to 0 if timer mode is not
> > tscdeadline
> > otherwise we waste cpu cycles i
From: Wanpeng Li
per-vCPU timer_advance_ns should be set to 0 if the timer mode is not
tscdeadline, otherwise we waste cpu cycles in lapic_timer_int_injected(),
especially on AMD platforms which don't support tscdeadline mode. We can
reset timer_advance_ns to the initial value
On Mon, 24 Aug 2020 at 09:03, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> The kick after setting KVM_REQ_PENDING_TIMER is used to handle the timer
> fires on a different pCPU which vCPU is running on, this kick is expensive
> since memory barrier, rcu, preemption disable/ena
ping, :)
On Wed, 19 Aug 2020 at 16:55, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> There is missing apic map recalculation after updating DFR, if it is
> INIT RESET, in x2apic mode, local apic is software enabled before.
> This patch fix it by introducing the functio
ping :)
On Wed, 12 Aug 2020 at 14:30, Wanpeng Li wrote:
>
> From: Wanpeng Li
>
> Check apic_lvtt_tscdeadline() mode directly instead of apic_lvtt_oneshot()
> and apic_lvtt_period() to guarantee the timer is in tsc-deadline mode when
> wrmsr MSR_IA32_TSCDEADLINE.
>
> S
From: Wanpeng Li
The kick after setting KVM_REQ_PENDING_TIMER is used to handle the case where
the timer fires on a different pCPU than the one the vCPU is running on. This
kick is expensive because of the memory barrier, rcu, and preemption
disable/enable operations involved. We don't need this kick when injecting an
already-expired timer