Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions
"Suresh E. Warrier" <warr...@linux.vnet.ibm.com> writes:

> This patch adds trace points in the guest entry and exit code and also
> for exceptions handled by the host in kernel mode - hypercalls and page
> faults. The new events are added to /sys/kernel/debug/tracing/events
> under a new subsystem called kvm_hv.
>
> 	/* Set this explicitly in case thread 0 doesn't have a vcpu */
> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>  	vc->vcore_state = VCORE_RUNNING;
>  	preempt_disable();
> +
> +	trace_kvmppc_run_core(vc, 0);
> +
>  	spin_unlock(&vc->lock);

Do we really want to call a tracepoint with a spin lock held? Is that a
good thing to do?

-aneesh
--
To unsubscribe from this list: send the line unsubscribe kvm-ppc in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions
On 20.11.14 11:40, Aneesh Kumar K.V wrote:
> Suresh E. Warrier <warr...@linux.vnet.ibm.com> writes:
>> This patch adds trace points in the guest entry and exit code and also
>> for exceptions handled by the host in kernel mode - hypercalls and page
>> faults. The new events are added to /sys/kernel/debug/tracing/events
>> under a new subsystem called kvm_hv.
>>
>> 	/* Set this explicitly in case thread 0 doesn't have a vcpu */
>> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>  	vc->vcore_state = VCORE_RUNNING;
>>  	preempt_disable();
>> +
>> +	trace_kvmppc_run_core(vc, 0);
>> +
>>  	spin_unlock(&vc->lock);
>
> Do we really want to call a tracepoint with a spin lock held? Is that a
> good thing to do?

I thought it was safe to call tracepoints inside of spin lock regions?
Steve?

Alex
Re: [PATCH] KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions
On Thu, 20 Nov 2014 13:10:12 +0100
Alexander Graf <ag...@suse.de> wrote:

> On 20.11.14 11:40, Aneesh Kumar K.V wrote:
>> Suresh E. Warrier <warr...@linux.vnet.ibm.com> writes:
>>> This patch adds trace points in the guest entry and exit code and also
>>> for exceptions handled by the host in kernel mode - hypercalls and page
>>> faults. The new events are added to /sys/kernel/debug/tracing/events
>>> under a new subsystem called kvm_hv.
>>>
>>> 	/* Set this explicitly in case thread 0 doesn't have a vcpu */
>>> @@ -1687,6 +1691,9 @@ static void kvmppc_run_core(struct kvmppc_vcore *vc)
>>>  	vc->vcore_state = VCORE_RUNNING;
>>>  	preempt_disable();
>>> +
>>> +	trace_kvmppc_run_core(vc, 0);
>>> +
>>>  	spin_unlock(&vc->lock);
>>
>> Do we really want to call a tracepoint with a spin lock held? Is that a
>> good thing to do?
>
> I thought it was safe to call tracepoints inside of spin lock regions?
> Steve?

There are tracepoints in the guts of the scheduler where the rq lock is
held. Don't worry about it. The tracing system is lockless.

-- Steve
Re: [PATCH] KVM: PPC: Book3S HV: Add missing HPTE unlock
On 05.11.14 02:21, Paul Mackerras wrote:
> From: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
>
> In kvm_test_clear_dirty(), if we find an invalid HPTE we move on to the
> next HPTE without unlocking the invalid one. In fact we should never
> find an invalid and unlocked HPTE in the rmap chain, but for robustness
> we should unlock it. This patch adds the missing unlock.
>
> Reported-by: Benjamin Herrenschmidt <b...@kernel.crashing.org>
> Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
> Signed-off-by: Paul Mackerras <pau...@samba.org>

Thanks, applied to kvm-ppc-queue.

Alex
Re: [PATCH] KVM: PPC: Book3S HV: ptes are big endian
On 03.11.14 16:35, Cédric Le Goater wrote:
> When being restored from qemu, the kvm_get_htab_header are in native
> endian, but the ptes are big endian. This patch fixes restore on a KVM
> LE host. Qemu also needs a fix for this:
>
>   http://lists.nongnu.org/archive/html/qemu-ppc/2014-11/msg8.html
>
> Signed-off-by: Cédric Le Goater <c...@fr.ibm.com>
> Cc: Paul Mackerras <pau...@samba.org>
> Cc: Alexey Kardashevskiy <a...@ozlabs.ru>
> Cc: Gregory Kurz <gk...@linux.vnet.ibm.com>
> ---
> Tested on 3.17-rc7 with LE and BE host.
>
>  arch/powerpc/kvm/book3s_64_mmu_hv.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> Index: linux-3.18-hv.git/arch/powerpc/kvm/book3s_64_mmu_hv.c
> ===================================================================
> --- linux-3.18-hv.git.orig/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ linux-3.18-hv.git/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -1542,6 +1542,8 @@ static ssize_t kvm_htab_write(struct fil
>  		err = -EFAULT;
>  		if (__get_user(v, lbuf) || __get_user(r, lbuf + 1))
>  			goto out;
> +		v = be64_to_cpu(v);
> +		r = be64_to_cpu(r);

This will trigger warnings with sparse. Please introduce new be64
variables that you do get_user on and that you then use as source
for v and r.

Alex
Re: [PATCH 2/5] KVM: PPC: Book3S HV: Fix an issue where guest is paused on receiving HMI
On 03.11.14 05:51, Paul Mackerras wrote:
> From: Mahesh Salgaonkar <mah...@linux.vnet.ibm.com>
>
> When we get an HMI (hypervisor maintenance interrupt) while in a guest,
> we see that the guest enters a paused state. The reason is that in
> kvmppc_handle_exit_hv() the HMI falls through the default path and we
> return to the host instead of resuming the guest, which causes the
> guest to enter the paused state. An HMI is a hypervisor-only interrupt
> and it is safe to resume the guest, since the host has already handled
> it. This patch adds a switch case to resume the guest.
>
> Without this patch we see the guest entering the paused state with the
> following console messages:
>
> [ 3003.329351] Severe Hypervisor Maintenance interrupt [Recovered]
> [ 3003.329356] Error detail: Timer facility experienced an error
> [ 3003.329359]   HMER: 0840
> [ 3003.329360]   TFMR: 4a12000980a84000
> [ 3003.329366] vcpu c007c35094c0 (40):
> [ 3003.329368] pc  = c00c2ba0   msr = 80009032   trap = e60
> [ 3003.329370] r 0 = c021ddc0           r16 = 0046
> [ 3003.329372] r 1 = c0007a02bbd0       r17 = 327d5d98
> [ 3003.329375] r 2 = c10980b8           r18 = 1fc9a0b0
> [ 3003.329377] r 3 = c142d6b8           r19 = c142d6b8
> [ 3003.329379] r 4 = 0002               r20 =
> [ 3003.329381] r 5 = c524a110           r21 =
> [ 3003.329383] r 6 = 0001               r22 =
> [ 3003.329386] r 7 =                    r23 = c524a110
> [ 3003.329388] r 8 =                    r24 = 0001
> [ 3003.329391] r 9 = 0001               r25 = c0007c31da38
> [ 3003.329393] r10 = c14280b8           r26 = 0002
> [ 3003.329395] r11 = 746f6f6c2f68656c   r27 = c524a110
> [ 3003.329397] r12 = 28004484           r28 = c0007c31da38
> [ 3003.329399] r13 = cfe01400           r29 = 0002
> [ 3003.329401] r14 = 0046               r30 = c3011e00
> [ 3003.329403] r15 = ffba               r31 = 0002
> [ 3003.329404] ctr = c041a670   lr  = c0272520
> [ 3003.329405] srr0 = c007e8d8   srr1 = 90001002
> [ 3003.329406] sprg0 =            sprg1 = cfe01400
> [ 3003.329407] sprg2 = cfe01400   sprg3 = 0005
> [ 3003.329408] cr = 48004482   xer = 2000   dsisr = 4200
> [ 3003.329409] dar = 010015020048
> [ 3003.329410] fault dar = 010015020048   dsisr = 4200
> [ 3003.329411] SLB (8 entries):
> [ 3003.329412]   ESID = c800       VSID = 40016e7779000510
> [ 3003.329413]   ESID = d801       VSID = 400142add1000510
> [ 3003.329414]   ESID = f804       VSID = 4000eb1a81000510
> [ 3003.329415]   ESID = 1f00080b   VSID = 40004fda0a000d90
> [ 3003.329416]   ESID = 3f00080c   VSID = 400039f536000d90
> [ 3003.329417]   ESID = 180d       VSID = 0001251b35150d90
> [ 3003.329417]   ESID = 0100080e   VSID = 4001e4609d90
> [ 3003.329418]   ESID = d8000819   VSID = 40013d349c000400
> [ 3003.329419] lpcr = c04881847001   sdr1 = 001b1906   last_inst =
> [ 3003.329421] trap = 0xe60 | pc = 0xc00c2ba0 | msr = 0x80009032
> [ 3003.329524] Severe Hypervisor Maintenance interrupt [Recovered]
> [ 3003.329526] Error detail: Timer facility experienced an error
> [ 3003.329527]   HMER: 0840
> [ 3003.329527]   TFMR: 4a12000980a94000
> [ 3006.359786] Severe Hypervisor Maintenance interrupt [Recovered]
> [ 3006.359792] Error detail: Timer facility experienced an error
> [ 3006.359795]   HMER: 0840
> [ 3006.359797]   TFMR: 4a12000980a84000
>
>  Id    Name      State
>  2     guest2    running
>  3     guest3    paused
>  4     guest4    running
>
> Signed-off-by: Mahesh Salgaonkar <mah...@linux.vnet.ibm.com>
> Signed-off-by: Paul Mackerras <pau...@samba.org>

Do we need this for PR running on bare metal as well?

Alex
Re: [PATCH 0/5] Some fixes for HV KVM on PPC
On 03.11.14 05:51, Paul Mackerras wrote:
> Here are fixes for five bugs which were found in the testing of our
> PowerKVM product. The bugs range from guest performance issues to
> guest crashes and memory corruption. Please apply.

Thanks, applied patches 1-4 to kvm-ppc-queue.

Alex
Re: [PATCH 5/5] KVM: PPC: Book3S HV: Check wait conditions before sleeping in kvmppc_vcore_blocked
On 03.11.14 05:52, Paul Mackerras wrote:
> From: Suresh E. Warrier <warr...@linux.vnet.ibm.com>
>
> The kvmppc_vcore_blocked() code does not check for the wait condition
> after putting the process on the wait queue. This means that it is
> possible for an external interrupt to become pending, but the vcpu to
> remain asleep until the next decrementer interrupt. The fix is to make
> one last check for pending exceptions and ceded state before calling
> schedule().
>
> Signed-off-by: Suresh Warrier <warr...@linux.vnet.ibm.com>
> Signed-off-by: Paul Mackerras <pau...@samba.org>

I don't understand the race you're fixing here. Can you please explain
it?

Alex

> ---
>  arch/powerpc/kvm/book3s_hv.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index cd7e030..1a7a281 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -1828,9 +1828,29 @@ static void kvmppc_wait_for_exec(struct kvm_vcpu *vcpu, int wait_state)
>   */
>  static void kvmppc_vcore_blocked(struct kvmppc_vcore *vc)
>  {
> +	struct kvm_vcpu *vcpu;
> +	int do_sleep = 1;
> +
>  	DEFINE_WAIT(wait);
>
>  	prepare_to_wait(&vc->wq, &wait, TASK_INTERRUPTIBLE);
> +
> +	/*
> +	 * Check one last time for pending exceptions and ceded state after
> +	 * we put ourselves on the wait queue
> +	 */
> +	list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
> +		if (vcpu->arch.pending_exceptions || !vcpu->arch.ceded) {
> +			do_sleep = 0;
> +			break;
> +		}
> +	}
> +
> +	if (!do_sleep) {
> +		finish_wait(&vc->wq, &wait);
> +		return;
> +	}
> +
>  	vc->vcore_state = VCORE_SLEEPING;
>  	spin_unlock(&vc->lock);
>  	schedule();
[PATCH v2] KVM: PPC: Book3S HV: ptes are big endian
When being restored from qemu, the kvm_get_htab_header are in native
endian, but the ptes are big endian. This patch fixes restore on a KVM
LE host. Qemu also needs a fix for this:

  http://lists.nongnu.org/archive/html/qemu-ppc/2014-11/msg8.html

Signed-off-by: Cédric Le Goater <c...@fr.ibm.com>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Alexey Kardashevskiy <a...@ozlabs.ru>
Cc: Gregory Kurz <gk...@linux.vnet.ibm.com>
---
Tested on 3.18-rc5 with LE and BE host.

v2: add be64 local variables to be friendly with sparse

 arch/powerpc/kvm/book3s_64_mmu_hv.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

Index: linux-3.18-hv.git/arch/powerpc/kvm/book3s_64_mmu_hv.c
===================================================================
--- linux-3.18-hv.git.orig/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ linux-3.18-hv.git/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -1539,9 +1539,15 @@ static ssize_t kvm_htab_write(struct fil
 		hptp = (__be64 *)(kvm->arch.hpt_virt + (i * HPTE_SIZE));
 		lbuf = (unsigned long __user *)buf;
 		for (j = 0; j < hdr.n_valid; ++j) {
+			__be64 hpte_v;
+			__be64 hpte_r;
+
 			err = -EFAULT;
-			if (__get_user(v, lbuf) || __get_user(r, lbuf + 1))
+			if (__get_user(hpte_v, lbuf) ||
+			    __get_user(hpte_r, lbuf + 1))
 				goto out;
+			v = be64_to_cpu(hpte_v);
+			r = be64_to_cpu(hpte_r);
 			err = -EINVAL;
 			if (!(v & HPTE_V_VALID))
 				goto out;
--
To unsubscribe from this list: send the line unsubscribe kvm-ppc in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html