On Mon, 2021-11-29 at 17:25 +, Sean Christopherson wrote:
> On Mon, Nov 29, 2021, Maxim Levitsky wrote:
> > (This thing is that when you tell the IOMMU that a vCPU is not running,
> > Another thing I discovered that this patch series totally breaks my VMs,
> > withou
On Thu, 2021-12-02 at 12:20 +0200, Maxim Levitsky wrote:
> On Mon, 2021-11-29 at 17:25 +, Sean Christopherson wrote:
> > On Mon, Nov 29, 2021, Maxim Levitsky wrote:
> > > (This thing is that when you tell the IOMMU that a vCPU is not running,
> > > Another thing I
On Tue, 2021-11-30 at 00:53 +0200, Maxim Levitsky wrote:
> On Mon, 2021-11-29 at 20:18 +0100, Paolo Bonzini wrote:
> > On 11/29/21 19:55, Sean Christopherson wrote:
> > > > Still it does seem to be a race that happens when IS_RUNNING=true but
> > > > vcpu->mod
> I think it's LatencyMon, https://www.resplendence.com/latencymon.
>
> Paolo
>
Yes.
Best regards,
Maxim Levitsky
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
but I still kept this tool to keep
an eye on it).
I really need to write a kvm-unit-test to stress-test IPIs, especially this
case; I will do this very soon.
Wei Huang, any info on this would be very helpful.
Maybe putting the avic physical table in
On Wed, 2021-10-27 at 16:40 +0300, Maxim Levitsky wrote:
> On Fri, 2021-10-08 at 19:12 -0700, Sean Christopherson wrote:
> > Invoke the arch hooks for block+unblock if and only if KVM actually
> > attempts to block the vCPU. The only non-nop implementation is on x86,
> > sp
On Thu, 2021-10-28 at 17:19 +, Sean Christopherson wrote:
> On Thu, Oct 28, 2021, Maxim Levitsky wrote:
> > On Fri, 2021-10-08 at 19:12 -0700, Sean Christopherson wrote:
> > > Remove the vCPU from the wakeup list before updating the notification
> > > vector in the
On Thu, 2021-10-28 at 15:55 +, Sean Christopherson wrote:
> On Thu, Oct 28, 2021, Maxim Levitsky wrote:
> > On Fri, 2021-10-08 at 19:12 -0700, Sean Christopherson wrote:
> > > Use READ_ONCE() when loading the posted interrupt descriptor control
> > > field to
On Thu, 2021-10-28 at 16:12 +, Sean Christopherson wrote:
> On Thu, Oct 28, 2021, Maxim Levitsky wrote:
> > On Fri, 2021-10-08 at 19:12 -0700, Sean Christopherson wrote:
> > > Hoist the CPU => APIC ID conversion for the Posted Interrupt descriptor
> > > out of th
I also think so, and maybe this can be added to the commit message.
Anyway, last one for the series :)
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
, POSTED_INTR_VECTOR))
> kvm_vcpu_wake_up(vcpu);
>
> return 0;
I both like and don't like this patch.
It is indeed a bit more self-documented, but then it allows the caller to
pass anything other than POSTED_INTR_NESTED_VECTOR/POSTED_INTR_
sted
paths were identical
before as well, so this patch could be done without patch 41 as well.
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson
> ---
> arch/x86/kvm/vmx/vmx.c
low for additional cleanups.
> >
> > It also aligns with SVM a little bit more (especially given patch 35),
> > doesn't it?
>
> Yes, aligning VMX and SVM APICv behavior as much as possible is definitely a
> goal
> of this series, though I suspect I failed to state that
Paolo's excellent LWN series of
articles on memory barriers (https://lwn.net/Articles/844224/),
to refresh my knowledge of memory barriers and understand the above
analysis better.
I agree with the above, but this is an area where it is so easy to make a
mistake that I can't be 100% sure.
Best regards,
Maxim Levitsky
>
> vcpu->stat.generic.blocking = 1;
>
> - kvm_arch_vcpu_blocking(vcpu);
> -
> prepare_to_rcuwait(wait);
> for (;;) {
> set_current_state(TASK_INTERRUPTIBLE);
> @@ -3224,8 +3222,6 @@ bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
> }
> finish_rcuwait(wait);
>
> - kvm_arch_vcpu_unblocking(vcpu);
> -
> vcpu->stat.generic.blocking = 0;
>
> return waited;
Reviewed-by: Maxim Levitsky
t_irq,
> uint32_t guest_irq, bool set);
> -void svm_vcpu_blocking(struct kvm_vcpu *vcpu);
> -void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
>
> /* sev.c */
>
Looks good. It is nice to get rid of all of this logic that was just making
things more complicated.
) & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
> -}
> -
> int avic_ga_log_notifier(u32 ga_tag);
> void avic_vm_destroy(struct kvm *kvm);
> int avic_vm_init(struct kvm *kvm);
I guess this makes sense to do, to get rid of avic_vcpu_is_running.
As you explained in previous
*vcpu);
> void pi_wakeup_handler(void);
> void __init pi_init_cpu(int cpu);
> bool pi_has_pending_interrupt(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 5517893f12fc..26ed8cd1a1f2 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7462,9 +7462,6 @@ void vmx_update_cpu_dirty_logging(struct kvm_vcpu *vcpu)
>
> static int vmx_pre_block(struct kvm_vcpu *vcpu)
> {
> - if (pi_pre_block(vcpu))
> - return 1;
> -
> if (kvm_lapic_hv_timer_in_use(vcpu))
> kvm_lapic_switch_to_sw_timer(vcpu);
>
> @@ -7475,8 +7472,6 @@ static void vmx_post_block(struct kvm_vcpu *vcpu)
> {
> if (kvm_x86_ops.set_hv_timer)
> kvm_lapic_switch_to_hv_timer(vcpu);
> -
> - pi_post_block(vcpu);
> }
>
> static void vmx_setup_mce(struct kvm_vcpu *vcpu)
Looks OK to me, and IMHO is a very good step toward simplifying that code,
but the logic is far from being simple so I might have missed something.
Especially, this should be tested with nested APICv, which I don't yet know well
enough to know if this can break it or not.
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
I don't know how expensive kvm_vcpu_wake_up is in this case.
Before this patch, avic_vcpu_is_running would only be false when the vCPU
is scheduled out (e.g. when vcpu_put was done on it).
Best regards,
Maxim Levitsky
vCPU, and the
> @@ -9921,9 +9920,6 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
> if (hv_timer)
> kvm_lapic_switch_to_hv_timer(vcpu);
>
> - if (kvm_x86_ops.post_block)
> - static_call(kvm_x86_post_block)(vcpu);
> -
> if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
> return 1;
> }
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
On Thu, 2021-10-28 at 14:28 +0300, Maxim Levitsky wrote:
> On Fri, 2021-10-08 at 19:12 -0700, Sean Christopherson wrote:
> > Hoist the CPU => APIC ID conversion for the Posted Interrupt descriptor
> > out of the loop to write the descriptor, preemption is disabled so the
start_sw_timer(apic);
> preempt_enable();
> }
> -EXPORT_SYMBOL_GPL(kvm_lapic_switch_to_sw_timer);
>
> void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu)
> {
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
(struct kvm_vcpu *vcpu)
> r = -EINTR;
> goto out;
> }
> + /*
> + * It should be impossible for the hypervisor timer to be in
> + * use before KVM has ever run the
/virt/kvm/kvm_main.c
> @@ -426,7 +426,6 @@ static void kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
> #endif
> kvm_async_pf_vcpu_init(vcpu);
>
> - vcpu->pre_pcpu = -1;
> INIT_LIST_HEAD(&vcpu->blocked_vcpu_list);
>
> k
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -304,8 +304,6 @@ struct kvm_vcpu {
> u64 requests;
> unsigned long guest_debug;
>
> - struct list_head blocked_vcpu_list;
> -
> struct mutex mutex;
> struct kvm_run *run;
on_cpu_lock, vcpu->pre_pcpu));
> + list_add_tail(&vcpu->blocked_vcpu_list,
> +	      &per_cpu(blocked_vcpu_on_cpu, vcpu->pre_pcpu));
> + spin_unlock(&per_cpu(blocked_vcpu_on_cpu_lock, vcpu->pre_pcpu));
>
> WARN(pi_desc->sn == 1,
>"Posted Interrupt Suppress Notification set before blocking");
Reviewed-by: Maxim Levitsky
dest << 8) & 0xFF00;
It would be nice to have a function for this; this pattern appears in this
file twice. Maybe there is a function already somewhere?
> +
> + do {
> + old.control = new.control = READ_ONCE(pi_desc->control);
led.
Best regards,
Maxim Levitsky
>
> Signed-off-by: Sean Christopherson
> ---
> arch/x86/kvm/vmx/posted_intr.c | 25 +++--
> 1 file changed, 11 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/posted_intr.c b/arch/x86/kvm/vmx/posted_intr.c
> do {
> - old.control = new.control = pi_desc->control;
> + old.control = new.control = READ_ONCE(pi_desc->control);
>
> /* set 'NV' to 'wakeup vector' */
> new.nv = POSTED_INTR_WAKEUP_VECTOR;
I wish there was a way to
u == -1)
> return;
>
> - WARN_ON(irqs_disabled());
> - local_irq_disable();
> + local_irq_save(flags);
> __pi_post_block(vcpu);
> - local_irq_enable();
> + local_irq_restore(flags);
> }
>
> /*
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
f posted-interrupts "
> - "is set before blocking\n");
> -
> /*
>* Since vCPU can be preempted during this process,
>* vcpu->cpu could be different with pre_
_pcpu);
> -
> - if (x2apic_mode)
> - new.ndst = dest;
> - else
> - new.ndst = (dest << 8) & 0xFF00;
> -
> /* set 'NV' to 'wakeup vector' */
> new.nv = POSTED_INTR_WAKEUP_VECTOR;
> } while (cmpxchg64(&pi_desc->control, old.control,
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
picv || !lapic_in_kernel(vcpu))
> + return;
> +
> + /* Nothing to do if PI.SN==0 and the vCPU isn't being migrated. */
> if (!pi_test_sn(pi_desc) && vcpu->cpu == cpu)
> return;
>
Reviewed-by: Maxim Levitsky
Best regards,
Maxim
> if (vcpu == me)
> continue;
> - if (rcuwait_active(kvm_arch_vcpu_get_wait(vcpu)) &&
> - !vcpu_dy_runnable(vcpu))
> + if (kvm_vcpu_is_blocking(vcpu) && !vcpu_dy_runnable(vcpu))
> continue;
> if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
> !kvm_arch_dy_has_pending_interrupt(vcpu) &&
Reviewed-by: Maxim Levitsky
)
> +static inline bool pi_test_on(struct pi_desc *pi_desc)
> {
> return test_bit(POSTED_INTR_ON,
> (unsigned long *)&pi_desc->control);
> }
>
> -static inline int pi_test_sn(struct pi_desc *pi_desc)
> +static inline bool pi_test_sn(struct pi_desc *pi_desc)
srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
> - kvm_vcpu_halt(vcpu);
> + if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
> + kvm_vcpu_halt(vcpu);
> + else
> + kvm_vcpu_block(vcpu);
> vcpu->
case where hardware correctly predicts do_halt_poll and
> there are no interrupts, "start" is probably only a few cycles old)
> and either approach is perfectly ok. But it's more precise to count
> any extra latency toward the halt-polling time.
>
(vcpu);
> if (kvm_apic_accept_events(vcpu) < 0) {
> r = 0;
> goto out;
Makes sense.
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
Could you elaborate on why you chose the _noskip suffix?
As far as I see, kvm_vcpu_halt just calls __kvm_vcpu_halt with a new vCPU run
state/exit reason, which is used only when the local APIC is not in the kernel
(which is these days not a commonly supported configuration).
The other user of __kvm_vcpu_halt
On Wed, 2021-10-27 at 17:10 +0300, Maxim Levitsky wrote:
> On Fri, 2021-10-08 at 19:12 -0700, Sean Christopherson wrote:
> > Rename a variety of HLT-related helpers to free up the function name
> > "kvm_vcpu_halt" for future use in generic KVM code, e.g. to differe
= ktime_get();
> if (waited) {
> vcpu->stat.generic.halt_wait_ns +=
> @@ -3273,7 +3275,6 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
> ktime_to_ns(cur) - ktime_to_ns(poll_end));
> }
> out:
> - kvm_arch_vcpu_unblock
}
> cpu_relax();
> poll_end = cur = ktime_get();
> } while (kvm_vcpu_can_poll(cur, stop));
> -
> - KVM_STATS_LOG_HIST_UPDATE(
> - vcpu->stat.generic.halt_poll_fail_hist,
> - ktime_to_ns(ktime_get()) - ktime_to_ns(start));
> }
>
>
Reviewed-by: Maxim Levitsky
Best regards,
Maxim Levitsky
- update_halt_poll_stats(
> - vcpu, ktime_to_ns(ktime_sub(poll_end, start)), waited);
> + update_halt_poll_stats(vcpu, start, poll_end, !waited);
>
> if (halt_poll_allowed) {
> if (!vcpu_valid_wakeup(vcpu)) {
Reviewed-by: Maxim Levitsky
> - update_halt_poll_stats(
> - vcpu, ktime_to_ns(ktime_sub(poll_end, start)), waited);
> + if (do_halt_poll)
> + update_halt_poll_stats(
> + vcpu, ktime_to_ns(ktime_sub(poll_end, start)), waited);
>
> if (halt_poll_allowed) {
>
vcpu->cpu can change at any moment anyway, so I think adding READ_ONCE()
can't really fix anything, but I do agree that it makes this more readable.
Reviewed-by: Maxim Levitsky
>
> Functionally, signalling the wrong CPU in this case is not an issue as
> task migration means the vCPU has exi
On Fri, 2021-04-02 at 10:38 -0700, Paolo Bonzini wrote:
> On 01/04/21 15:54, Maxim Levitsky wrote:
> > Hi!
> >
> > I would like to publish two debug features which were needed for other stuff
> > I work on.
> >
> > One is the reworked lx-symbols scri
.gd22...@pd.tnic/
CC: Borislav Petkov
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/x86.c | 3 +++
arch/x86/kvm/x86.h | 2 ++
2 files changed, 5 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3627ce8fe5bb..1a51031d64d8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
happen, but at least this eliminates the common
case.
Signed-off-by: Maxim Levitsky
---
Documentation/virt/kvm/api.rst | 1 +
arch/x86/include/asm/kvm_host.h | 3 ++-
arch/x86/include/uapi/asm/kvm.h | 1 +
arch/x86/kvm/x86.c | 4
4 files changed, 8 insertions(+), 1 deletion
Currently #TS interception is only done once.
Also exception interception is not enabled for SEV guests.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 2 +
arch/x86/kvm/svm/svm.c | 70 +
arch/x86/kvm/svm/svm.h | 6
Split the check for having a vmexit handler to
svm_check_exit_valid, and make svm_handle_invalid_exit
only handle a vmexit that is already not valid.
Signed-off-by: Maxim Levitsky
---
arch/x86/kvm/svm/svm.c | 17 +
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git
This capability will allow the user to know which KVM_GUESTDBG_* bits
are supported.
Signed-off-by: Maxim Levitsky
---
Documentation/virt/kvm/api.rst | 3 +++
include/uapi/linux/kvm.h | 1 +
2 files changed, 4 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt
Move KVM_GUESTDBG_VALID_MASK to kvm_host.h
and use it to return the value of this capability.
Compile tested only.
Signed-off-by: Maxim Levitsky
---
arch/arm64/include/asm/kvm_host.h | 4
arch/arm64/kvm/arm.c | 2 ++
arch/arm64/kvm/guest.c| 5 -
3 files changed
Store the supported bits into KVM_GUESTDBG_VALID_MASK
macro, similar to how arm does this.
Signed-off-by: Maxim Levitsky
---
arch/x86/include/asm/kvm_host.h | 9 +
arch/x86/kvm/x86.c | 2 ++
2 files changed, 11 insertions(+)
diff --git a/arch/x86/include/asm/kvm_host.h b
Define KVM_GUESTDBG_VALID_MASK and use it to implement this capability.
Compile tested only.
Signed-off-by: Maxim Levitsky
---
arch/s390/include/asm/kvm_host.h | 4
arch/s390/kvm/kvm-s390.c | 3 +++
2 files changed, 7 insertions(+)
diff --git a/arch/s390/include/asm/kvm_host.h b
)
Signed-off-by: Maxim Levitsky
---
kernel/module.c | 8 +-
scripts/gdb/linux/symbols.py | 203 +++
2 files changed, 143 insertions(+), 68 deletions(-)
diff --git a/kernel/module.c b/kernel/module.c
index 30479355ab85..ea81fc06ea1f 100644
--- a/kernel/module.c
Best regards,
Maxim Levitsky
Maxim Levitsky (9):
scripts/gdb: rework lx-symbols gdb script
KVM: introduce KVM_CAP_SET_GUEST_DEBUG2
KVM: x86: implement KVM_CAP_SET_GUEST_DEBUG2
KVM: aarch64: implement KVM_CAP_SET_GUEST_DEBUG2
KVM: s390x: implement KVM_CAP_SET_GUEST_DEBUG2
KVM: x86: implement
On Mon, 2020-05-18 at 13:51 +0200, Paolo Bonzini wrote:
> On 18/05/20 13:34, Maxim Levitsky wrote:
> > > In high-performance configurations, most of the time virtio devices are
> > > processed in another thread that polls on the virtio rings. In this
> > > s
replaced
by a userspace driver,
something I see a lot lately, and which was the grounds for rejection of my
nvme-mdev proposal.
Best regards,
Maxim Levitsky