[Bug 103131] New: Forgotten stack pushes with KVM_MEM_READONLY

2015-08-19 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=103131

Bug ID: 103131
   Summary: Forgotten stack pushes with KVM_MEM_READONLY
   Product: Virtualization
   Version: unspecified
Kernel Version: 4.1.5
  Hardware: x86-64
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: kvm
  Assignee: virtualization_...@kernel-bugs.osdl.org
  Reporter: felix.vo...@posteo.de
Regression: No

Created attachment 185201
  --> https://bugzilla.kernel.org/attachment.cgi?id=185201&action=edit
Test program (C99)

I found this bug when I wanted to use KVM_MEM_READONLY to capture all memory
writes in my hypervisor.
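
For reference, registering the slot read-only looks roughly like the sketch
below (an illustrative slot number and no error handling; the attached test
program presumably does something similar):

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

/* Register guest RAM as read-only so that every guest write to it is
 * reported to userspace instead of being applied silently. */
static int set_readonly_slot(int vm_fd, void *host_mem, __u64 gpa, __u64 size)
{
        struct kvm_userspace_memory_region region;

        memset(&region, 0, sizeof(region));
        region.slot            = 0;                 /* illustrative slot number */
        region.flags           = KVM_MEM_READONLY;
        region.guest_phys_addr = gpa;
        region.memory_size     = size;
        region.userspace_addr  = (uintptr_t)host_mem;

        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

With such a slot, guest writes come back to userspace as KVM_EXIT_MMIO exits
with mmio.is_write set, which is how the test program prints them below.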

Attached test program output when run with the argument 0 (no flags):

 vm exit from f000:f006 [cs base 0x000f0000, pc=0x000ff006]
 io: out 2 bytes x1 @0xbeef: fa 7f
 vm exit from f000:fffb [cs base 0x000f0000, pc=0x000ffffb]
 halted

Output when run with 2 (KVM_MEM_READONLY):

 vm exit from f000:f000 [cs base 0x000f0000, pc=0x000ff000]
 write 2 bytes at 0x7ffa: fa ff 00 00 00 00 00 00
 vm exit from f000:f006 [cs base 0x000f, pc=0x000ff006]
 io: out 2 bytes x1 @0xbeef: fa 7f
 vm exit from f4f4:fffa [cs base 0x000f4f40, pc=0x00104f3a]
 internal error, suberror 0x1

In real mode, doing an INT call is roughly equivalent to pushing the flags
register, CS and IP, and then jumping to the appropriate handler listed in the
IVT. As you can see above, when the KVM_MEM_READONLY flag is set, only the
push of IP is captured by the hypervisor; the other memory writes are forgotten
(although the stack pointer is updated accordingly). This causes a later IRET
to return to the wrong segment (never mind with the wrong flags) and the
virtual machine to crash.
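
Spelled out as code, the expected sequence is roughly the sketch below (a
model of the architectural behaviour, not KVM code; real_mode_int, mem and the
register parameters are names made up for illustration):

#include <stdint.h>
#include <string.h>

/* Rough model of a real-mode INT n: three 16-bit pushes, then an indirect
 * jump through the interrupt vector table at physical address 0.
 * (Clearing of IF/TF is left out.) */
static void real_mode_int(uint8_t *mem, uint8_t n, uint16_t *cs, uint16_t *ip,
                          uint16_t ss, uint16_t *sp, uint16_t flags)
{
        uint32_t ss_base = (uint32_t)ss << 4;

        *sp -= 2; memcpy(mem + ss_base + *sp, &flags, 2); /* push FLAGS -- write not seen above */
        *sp -= 2; memcpy(mem + ss_base + *sp, cs, 2);     /* push CS    -- write not seen above */
        *sp -= 2; memcpy(mem + ss_base + *sp, ip, 2);     /* push IP    -- the only write captured */

        memcpy(ip, mem + 4 * n, 2);                       /* IVT entry: handler offset  */
        memcpy(cs, mem + 4 * n + 2, 2);                   /* IVT entry: handler segment */
}

With KVM_MEM_READONLY set, only the last of those three pushes shows up as a
write exit in the output above; the FLAGS and CS pushes never reach userspace.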

I don't know if there are any security implications; I quite doubt it to be
honest, but if anyone wants to design a cutesy logo, please do.



[Bug 103141] New: Host-triggerable NULL pointer oops

2015-08-19 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=103141

Bug ID: 103141
   Summary: Host-triggerable NULL pointer oops
   Product: Virtualization
   Version: unspecified
Kernel Version: 4.1.5
  Hardware: x86-64
OS: Linux
  Tree: Mainline
Status: NEW
  Severity: normal
  Priority: P1
 Component: kvm
  Assignee: virtualization_...@kernel-bugs.osdl.org
  Reporter: felix.vo...@posteo.de
Regression: No

Created attachment 185241
  --> https://bugzilla.kernel.org/attachment.cgi?id=185241&action=edit
Test program (C99)

Amusingly enough, I found this while trying to come up with a minimal test
program for #103131.

Running ioctl(KVM_CREATE_VCPU) _after_ ioctl(KVM_SET_USER_MEMORY_REGION) with
certain address/size combinations may generate a null pointer dereference.
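
A minimal sketch of the triggering sequence (the guest_phys_addr/memory_size
pair below is only a placeholder; the combination that actually oopses my host
is the one in the attached test program):

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR);
        int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

        size_t size = 0x44000;                          /* placeholder */
        void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct kvm_userspace_memory_region umr = {
                .slot            = 0,
                .guest_phys_addr = 0xfffbc000ULL,       /* placeholder */
                .memory_size     = size,
                .userspace_addr  = (uintptr_t)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &umr);

        /* Creating the VCPU only *after* the region is registered is the
         * ordering that can trigger the NULL pointer dereference. */
        return ioctl(vm, KVM_CREATE_VCPU, 0);
}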

dmesg after running the test program:

[11557.519426] BUG: unable to handle kernel NULL pointer dereference at
005f
[11557.520561] IP: [a045b2f5] vmx_fpu_activate+0x5/0x20 [kvm_intel]
[11557.521716] PGD 13841a067 PUD 13857c067 PMD 0 
[11557.522891] Oops:  [#25] PREEMPT SMP 
[11557.524073] Modules linked in: [REDACTED]
[11557.534572] CPU: 5 PID: 4295 Comm: tcc Tainted: P  DO   
4.1.5-1-ARCH #1
[11557.536451] Hardware name: [REDACTED]
[11557.538361] task: 880068425180 ti: 880138784000 task.ti:
880138784000
[11557.540331] RIP: 0010:[a045b2f5]  [a045b2f5]
vmx_fpu_activate+0x5/0x20 [kvm_intel]
[11557.542367] RSP: 0018:880138787da0  EFLAGS: 00010292
[11557.544411] RAX: a0476160 RBX: ffef RCX:

[11557.546476] RDX: 1f85 RSI: 88014b15e8b0 RDI:
ffef
[11557.548553] RBP: 880138787db8 R08: 0001e8b0 R09:
a045cbf3
[11557.550605] R10: ea00027eee00 R11: 88014b157348 R12:

[11557.552637] R13:  R14: ae41 R15:

[11557.554691] FS:  7fba3936d700() GS:88014b14()
knlGS:
[11557.556796] CS:  0010 DS:  ES:  CR0: 80050033
[11557.558914] CR2: 005f CR3: 00013857d000 CR4:
000426e0
[11557.561092] Stack:
[11557.563213]  a03deaf1  8800a52fc000
880138787e78
[11557.565412]  a03ca6d8 880138787de8 81175b5b
88011edffb80
[11557.567650]   fffbc000 00044000
7fba39371000
[11557.569906] Call Trace:
[11557.572169]  [a03deaf1] ? kvm_arch_vcpu_create+0x51/0x70 [kvm]
[11557.574476]  [a03ca6d8] kvm_vm_ioctl+0x1c8/0x7a0 [kvm]
[11557.576773]  [81175b5b] ?
lru_cache_add_active_or_unevictable+0x2b/0xb0
[11557.579118]  [811f4646] do_vfs_ioctl+0x2c6/0x4d0
[11557.581470]  [811f48d1] SyS_ioctl+0x81/0xa0
[11557.583841]  [8158bf2e] system_call_fastpath+0x12/0x71
[11557.586265] Code: 00 e8 20 bf ff ff 5b 41 5c 5d c3 0f 1f 00 48 8b 05 31 85
fc ff ff 90 b8 00 00 00 eb 87 66 0f 1f 84 00 00 00 00 00 66 66 66 66 90 8b 47
70 85 c0 75 0a 55 48 89 e5 e8 3b ff ff ff 5d f3 c3 0f 1f 
[11557.592112] RIP  [a045b2f5] vmx_fpu_activate+0x5/0x20 [kvm_intel]
[11557.594990]  RSP 880138787da0
[11557.597859] CR2: 005f
[11557.600786] ---[ end trace b28b93d27b3449c9 ]---

When I move ioctl(KVM_CREATE_VCPU) immediately below ioctl(KVM_CREATE_VM) there
is no oops, but a later KVM_RUN exits with KVM_EXIT_INTERNAL_ERROR, subcode
KVM_INTERNAL_ERROR_EMULATION. The crashes also stop when I decrease
umr.memory_size below what I specified in the attached test program.



Re: [PATCH v4 1/2] arm64: KVM: Optimize arm64 skip 30-50% vfp/simd save/restore on exits

2015-08-19 Thread Christoffer Dall
Hi Mario,

On Wed, Aug 05, 2015 at 05:11:37PM +0100, Marc Zyngier wrote:
 On 16/07/15 22:29, Mario Smarduch wrote:
  This patch only saves and restores FP/SIMD registers on Guest access. To do
  this cptr_el2 FP/SIMD trap is set on Guest entry and later checked on exit.
  lmbench, hackbench show significant improvements, for 30-50% exits FP/SIMD
  context is not saved/restored
  
  Signed-off-by: Mario Smarduch m.smard...@samsung.com
 
 So this patch seems to break 32bit guests on arm64.  I've had a look,
 squashed a few bugs that I dangerously overlooked during the review, but
 it still doesn't work (it doesn't crash anymore, but I get random
 illegal VFP instructions in 32bit guests).
 
 I'd be glad if someone could eyeball the following patch and tell me
 what's going wrong. If we don't find the root cause quickly enough, I'll
 have to drop the series from -next, and that'd be a real shame.
 
 Thanks,
 
   M.
 
 commit 5777dc55fbc170426a85e00c26002dd5a795cfa5
 Author: Marc Zyngier marc.zyng...@arm.com
 Date:   Wed Aug 5 16:53:01 2015 +0100
 
 KVM: arm64: NOTAFIX: Prevent crash when 32bit guest uses VFP
 
 Since we switch FPSIMD in a lazy way, access to FPEXC32_EL2
 must be guarded by skip_fpsimd_state. Otherwise, all hell
 breaks loose.
 
 Also, FPEXC32_EL2 must be restored when we trap to EL2 to
 enable floating point.
 
 Note that while it prevents the host from catching fire, the
 guest still doesn't work properly, and I don't understand why just
 yet.
 
 Not-really-signed-off-by: Marc Zyngier marc.zyng...@arm.com
 
 diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
 index c8e0c70..b53ec5d 100644
 --- a/arch/arm64/kvm/hyp.S
 +++ b/arch/arm64/kvm/hyp.S
 @@ -431,10 +431,12 @@
   add x3, x2, #CPU_SYSREG_OFFSET(DACR32_EL2)
   mrs x4, dacr32_el2
   mrs x5, ifsr32_el2
 - mrs x6, fpexc32_el2
   stp x4, x5, [x3]
 - str x6, [x3, #16]
 
 + skip_fpsimd_state x8, 3f
 + mrs x6, fpexc32_el2
 + str x6, [x3, #16]
 +3:
   skip_debug_state x8, 2f
   mrs x7, dbgvcr32_el2
   str x7, [x3, #24]
 @@ -461,10 +463,8 @@
 
   add x3, x2, #CPU_SYSREG_OFFSET(DACR32_EL2)
   ldp x4, x5, [x3]
 - ldr x6, [x3, #16]
   msr dacr32_el2, x4
   msr ifsr32_el2, x5
 - msr fpexc32_el2, x6
 
   skip_debug_state x8, 2f
   ldr x7, [x3, #24]
 @@ -669,12 +669,14 @@ __restore_debug:
   ret
 
  __save_fpsimd:
 + skip_fpsimd_state x3, 1f
   save_fpsimd
 - ret
 +1:   ret
 
  __restore_fpsimd:
 + skip_fpsimd_state x3, 1f
   restore_fpsimd
 - ret
 +1:   ret
 
  switch_to_guest_fpsimd:
   push    x4, lr
 @@ -682,6 +684,7 @@ switch_to_guest_fpsimd:
   mrs x2, cptr_el2
   bic x2, x2, #CPTR_EL2_TFP
   msr cptr_el2, x2
 + isb
 
   mrs x0, tpidr_el2
 
 @@ -692,6 +695,10 @@ switch_to_guest_fpsimd:
   add x2, x0, #VCPU_CONTEXT
   bl __restore_fpsimd
 
 + skip_32bit_state x3, 1f
 + ldr x4, [x2, #CPU_SYSREG_OFFSET(FPEXC32_EL2)]
 + msr fpexc32_el2, x4
 +1:
   pop x4, lr
   pop x2, x3
   pop x0, x1
 @@ -754,9 +761,7 @@ __kvm_vcpu_return:
   add x2, x0, #VCPU_CONTEXT
 
   save_guest_regs
 - skip_fpsimd_state x3, 1f
   bl __save_fpsimd
 -1:
   bl __save_sysregs
 
   skip_debug_state x3, 1f
 @@ -777,9 +782,7 @@ __kvm_vcpu_return:
   kern_hyp_va x2
 
   bl __restore_sysregs
 - skip_fpsimd_state x3, 1f
   bl __restore_fpsimd
 -1:
   /* Clear FPSIMD and Trace trapping */
   msr cptr_el2, xzr
 
 

Marc and I hunted this down at KVM Forum and we believe we've found the
issue.  Please have a look at the following follow-up patch to
Marc's patch above:

diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 8b2a73b4..842e727 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -769,11 +769,26 @@
 
 .macro activate_traps
ldr x2, [x0, #VCPU_HCR_EL2]
+
+   /*
+* We are about to set CPTR_EL2.TFP to trap all floating point
+* register accesses to EL2, however, the ARM ARM clearly states that
+* traps are only taken to EL2 if the operation would not otherwise
+* trap to EL1.  Therefore, always make sure that for 32-bit guests,
+* we set FPEXC.EN to prevent traps to EL1, when setting the TFP bit.
+*/
+   tbnz    x2, #HCR_RW_SHIFT, 99f // open code skip_32bit_state
+   mov x3, #(1 << 30)
+   msr fpexc32_el2, x3
+   isb
+99:
+
msr hcr_el2, x2
mov x2, #CPTR_EL2_TTA
orr x2, x2, #CPTR_EL2_TFP
msr cptr_el2, x2
 
+
 mov x2, #(1 << 15)  // Trap CP15 Cr=15
msr hstr_el2, x2
 


Thanks,
-Christoffer

Re: PING: [PATCH] KVM: arm64: Decode basic HYP fault information

2015-08-19 Thread Christoffer Dall
Hi Pavel,

It's only been a week and we're approaching the merge window.

Both Marc and I are well aware that you sent this patch.

Please be more patient.  For example, if a month or more passes without
you hearing anything, then it's ok to ask what the plans are.

-Christoffer

On Wed, Aug 19, 2015 at 10:26:34AM +0300, Pavel Fedin wrote:
 PING
 
 Kind regards,
 Pavel Fedin
 Expert Engineer
 Samsung Electronics Research center Russia
 
 
  -Original Message-
  From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On 
  Behalf Of Pavel Fedin
  Sent: Tuesday, August 11, 2015 10:34 AM
  To: kvm...@lists.cs.columbia.edu; kvm@vger.kernel.org
  Cc: 'Christoffer Dall'; 'Marc Zyngier'
  Subject: [PATCH] KVM: arm64: Decode basic HYP fault information
  
  Print exception vector name, exception class and PC translated to EL1 
  virtual
  address. Significantly aids debugging HYP crashes without special means like
  JTAG.
  
  Signed-off-by: Pavel Fedin p.fe...@samsung.com
  ---
   arch/arm64/kvm/handle_exit.c | 30 +
   arch/arm64/kvm/hyp.S | 46 
  +---
   2 files changed, 48 insertions(+), 28 deletions(-)
  
  diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
  index 29b184a..4d70d64 100644
  --- a/arch/arm64/kvm/handle_exit.c
  +++ b/arch/arm64/kvm/handle_exit.c
  @@ -136,3 +136,33 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run 
  *run,
  return 0;
  }
   }
  +
  +static const char *const hyp_faults[] = {
  +   "EL2t Synchronous",
  +   "EL2t IRQ",
  +   "EL2t FIQ",
  +   "EL2t Error",
  +   "EL2h Synchronous",
  +   "EL2h IRQ",
  +   "EL2h FIQ",
  +   "EL2h Error",
  +   "EL1 Synchronous",
  +   "EL1 IRQ",
  +   "EL1 FIQ",
  +   "EL1 Error"
  +};
  +
  +void kvm_hyp_panic(unsigned long vector, unsigned int spsr, unsigned long 
  pc,
  +  unsigned int esr, unsigned long far, unsigned long hpfar,
  +  unsigned long par, struct kvm_vcpu *vcpu)
  +{
  +   pr_emerg("Unhandled HYP exception %s on VCPU %p\n",
  +   hyp_faults[vector], vcpu);
  +   pr_emerg("PC : %016lx SPSR : %08x ESR: %08x\n", pc, spsr, esr);
  +   pr_emerg("FAR: %016lx HPFAR: %016lx PAR: %016lx\n", far, hpfar, par);
  +
  +   pr_emerg("Exception class: %02x Translated PC: %016lx\n",
  +   esr >> ESR_ELx_EC_SHIFT, pc - HYP_PAGE_OFFSET + PAGE_OFFSET);
  +
  +   panic("HYP panic");
  +}
  diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
  index c81eaaf..62785cd 100644
  --- a/arch/arm64/kvm/hyp.S
  +++ b/arch/arm64/kvm/hyp.S
  @@ -1060,13 +1060,11 @@ __kvm_hyp_panic:
  ldr x2, [x0, #VCPU_HOST_CONTEXT]
  kern_hyp_va x2
  
  +   mov x0, lr
  bl __restore_sysregs
  +   mov lr, x0
  
  -1: adr x0, __hyp_panic_str
  -   adr x1, 2f
  -   ldp x2, x3, [x1]
  -   sub x0, x0, x2
  -   add x0, x0, x3
  +1: mov x0, lr
  mrs x1, spsr_el2
  mrs x2, elr_el2
  mrs x3, esr_el2
  @@ -1078,20 +1076,11 @@ __kvm_hyp_panic:
  mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
PSR_MODE_EL1h)
  msr spsr_el2, lr
  -   ldr lr, =panic
  +   ldr lr, =kvm_hyp_panic
  msr elr_el2, lr
  eret
  -
  -   .align  3
  -2: .quad   HYP_PAGE_OFFSET
  -   .quad   PAGE_OFFSET
   ENDPROC(__kvm_hyp_panic)
  
  -__hyp_panic_str:
  - .ascii  "HYP panic:\nPS:%08x PC:%p ESR:%p\nFAR:%p HPFAR:%p PAR:%p\nVCPU:%p\n\0"
  -
  -   .align  2
  -
   /*
* u64 kvm_call_hyp(void *hypfn, ...);
*
  @@ -1115,26 +1104,27 @@ ENTRY(kvm_call_hyp)
  ret
   ENDPROC(kvm_call_hyp)
  
  -.macro invalid_vector  label, target
  +.macro invalid_vector  label, N, target
  .align  2
   \label:
  +   mov lr, #\N
  b \target
   ENDPROC(\label)
   .endm
  
  /* None of these should ever happen */
  -   invalid_vector  el2t_sync_invalid, __kvm_hyp_panic
  -   invalid_vector  el2t_irq_invalid, __kvm_hyp_panic
  -   invalid_vector  el2t_fiq_invalid, __kvm_hyp_panic
  -   invalid_vector  el2t_error_invalid, __kvm_hyp_panic
  -   invalid_vector  el2h_sync_invalid, __kvm_hyp_panic
  -   invalid_vector  el2h_irq_invalid, __kvm_hyp_panic
  -   invalid_vector  el2h_fiq_invalid, __kvm_hyp_panic
  -   invalid_vector  el2h_error_invalid, __kvm_hyp_panic
  -   invalid_vector  el1_sync_invalid, __kvm_hyp_panic
  -   invalid_vector  el1_irq_invalid, __kvm_hyp_panic
  -   invalid_vector  el1_fiq_invalid, __kvm_hyp_panic
  -   invalid_vector  el1_error_invalid, __kvm_hyp_panic
  +   invalid_vector  el2t_sync_invalid, 0, __kvm_hyp_panic
  +   invalid_vector  el2t_irq_invalid, 1, __kvm_hyp_panic
  +   invalid_vector  el2t_fiq_invalid, 2, __kvm_hyp_panic
  +   invalid_vector  el2t_error_invalid, 3, __kvm_hyp_panic
  +   invalid_vector  el2h_sync_invalid, 4, __kvm_hyp_panic
  +   invalid_vector  el2h_irq_invalid, 5, __kvm_hyp_panic
  +   invalid_vector  el2h_fiq_invalid, 6, __kvm_hyp_panic
  +   invalid_vector  

Re: [PATCH v4 1/2] arm64: KVM: Optimize arm64 skip 30-50% vfp/simd save/restore on exits

2015-08-19 Thread Marc Zyngier
On Wed, 19 Aug 2015 14:52:08 -0700
Mario Smarduch m.smard...@samsung.com wrote:

 Hi Christoffer,
I'll test it and work with it.

FWIW, I've added these patches to both -queue and -next, and from the
tests Christoffer has run, it looks pretty good.

Thanks,

M.
-- 
Jazz is not dead. It just smells funny.


[Bug 103141] Host-triggerable NULL pointer oops

2015-08-19 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=103141

Wanpeng Li wanpeng...@hotmail.com changed:

   What|Removed |Added

 CC||wanpeng...@hotmail.com

--- Comment #1 from Wanpeng Li wanpeng...@hotmail.com ---
The commit below fixes it.

commit 370777daab3f024f1645177039955088e2e9ae73
Author: Radim Krčmář rkrc...@redhat.com
Date:   Fri Jul 3 15:49:28 2015 +0200

KVM: VMX: fix vmwrite to invalid VMCS

fpu_activate is called outside of vcpu_load(), which means it should not
touch VMCS, but fpu_activate needs to.  Avoid the call by moving it to a
point where we know that the guest needs eager FPU and VMCS is loaded.

This will get rid of the following trace

 vmwrite error: reg 6800 value 0 (err 1)
  [8162035b] dump_stack+0x19/0x1b
  [a046c701] vmwrite_error+0x2c/0x2e [kvm_intel]
  [a045f26f] vmcs_writel+0x1f/0x30 [kvm_intel]
  [a04617e5] vmx_fpu_activate.part.61+0x45/0xb0 [kvm_intel]
  [a0461865] vmx_fpu_activate+0x15/0x20 [kvm_intel]
  [a0560b91] kvm_arch_vcpu_create+0x51/0x70 [kvm]
  [a0548011] kvm_vm_ioctl+0x1c1/0x760 [kvm]
  [8118b55a] ? handle_mm_fault+0x49a/0xec0
  [811e47d5] do_vfs_ioctl+0x2e5/0x4c0
  [8127abbe] ? file_has_perm+0xae/0xc0
  [811e4a51] SyS_ioctl+0xa1/0xc0
  [81630949] system_call_fastpath+0x16/0x1b

(Note: we also unconditionally activate FPU in vmx_vcpu_reset(), so the
 removed code added nothing.)

Fixes: c447e76b4cab ("kvm/fpu: Enable eager restore kvm FPU for MPX")
Cc: sta...@vger.kernel.org
Reported-by: Vlastimil Holer vlastimil.ho...@gmail.com
Signed-off-by: Radim Krčmář rkrc...@redhat.com
Signed-off-by: Paolo Bonzini pbonz...@redhat.com



Re: [PATCH v4 1/2] arm64: KVM: Optimize arm64 skip 30-50% vfp/simd save/restore on exits

2015-08-19 Thread Mario Smarduch
Great, that's even better.

On 8/19/2015 3:28 PM, Marc Zyngier wrote:
 On Wed, 19 Aug 2015 14:52:08 -0700
 Mario Smarduch m.smard...@samsung.com wrote:
 
 Hi Christoffer,
I'll test it and work with it.
 
 FWIW, I've added these patches to both -queue and -next, and from the
 tests Christoffer has run, it looks pretty good.
 
 Thanks,
 
   M.
 


One off question: Hot vertical scaling of a KVM?

2015-08-19 Thread ROZUMNY, VICTOR
Hello-

Your site states that one-off questions might be answered via email, so
here I am. 

I have limited knowledge of KVM, but it is my understanding that in order to 
vertically scale RAM for a KVM guest, the guest needs to be shut down, resized, 
and rebooted, resulting in a 5-10 minute interruption of service.

Is this true and if so do you know of any efforts to change this behavior?

Thank you!

v.

Vic Rozumny | Principal Technical Architect
AIC - AT&T Integrated Cloud | Complex Engineering







Re: [PATCH v4 1/2] arm64: KVM: Optimize arm64 skip 30-50% vfp/simd save/restore on exits

2015-08-19 Thread Mario Smarduch
Hi Christoffer,
   I'll test it and work with it.

Thanks,
  Mario

On 8/19/2015 10:49 AM, Christoffer Dall wrote:
 Hi Mario,
 
 On Wed, Aug 05, 2015 at 05:11:37PM +0100, Marc Zyngier wrote:
 On 16/07/15 22:29, Mario Smarduch wrote:
 This patch only saves and restores FP/SIMD registers on Guest access. To do
 this cptr_el2 FP/SIMD trap is set on Guest entry and later checked on exit.
 lmbench, hackbench show significant improvements, for 30-50% exits FP/SIMD
 context is not saved/restored

 Signed-off-by: Mario Smarduch m.smard...@samsung.com

 So this patch seems to break 32bit guests on arm64.  I've had a look,
 squashed a few bugs that I dangerously overlooked during the review, but
 it still doesn't work (it doesn't crash anymore, but I get random
 illegal VFP instructions in 32bit guests).

 I'd be glad if someone could eyeball the following patch and tell me
 what's going wrong. If we don't find the root cause quickly enough, I'll
 have to drop the series from -next, and that'd be a real shame.

 Thanks,

  M.

 commit 5777dc55fbc170426a85e00c26002dd5a795cfa5
 Author: Marc Zyngier marc.zyng...@arm.com
 Date:   Wed Aug 5 16:53:01 2015 +0100

 KVM: arm64: NOTAFIX: Prevent crash when 32bit guest uses VFP

 Since we switch FPSIMD in a lazy way, access to FPEXC32_EL2
 must be guarded by skip_fpsimd_state. Otherwise, all hell
 breaks loose.

 Also, FPEXC32_EL2 must be restored when we trap to EL2 to
 enable floating point.

 Note that while it prevents the host from catching fire, the
 guest still doesn't work properly, and I don't understand why just
 yet.

 Not-really-signed-off-by: Marc Zyngier marc.zyng...@arm.com

 diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
 index c8e0c70..b53ec5d 100644
 --- a/arch/arm64/kvm/hyp.S
 +++ b/arch/arm64/kvm/hyp.S
 @@ -431,10 +431,12 @@
  add x3, x2, #CPU_SYSREG_OFFSET(DACR32_EL2)
  mrs x4, dacr32_el2
  mrs x5, ifsr32_el2
 -mrs x6, fpexc32_el2
  stp x4, x5, [x3]
 -str x6, [x3, #16]

 +skip_fpsimd_state x8, 3f
 +mrs x6, fpexc32_el2
 +str x6, [x3, #16]
 +3:
  skip_debug_state x8, 2f
  mrs x7, dbgvcr32_el2
  str x7, [x3, #24]
 @@ -461,10 +463,8 @@

  add x3, x2, #CPU_SYSREG_OFFSET(DACR32_EL2)
  ldp x4, x5, [x3]
 -ldr x6, [x3, #16]
  msr dacr32_el2, x4
  msr ifsr32_el2, x5
 -msr fpexc32_el2, x6

  skip_debug_state x8, 2f
  ldr x7, [x3, #24]
 @@ -669,12 +669,14 @@ __restore_debug:
  ret

  __save_fpsimd:
 +skip_fpsimd_state x3, 1f
  save_fpsimd
 -ret
 +1:  ret

  __restore_fpsimd:
 +skip_fpsimd_state x3, 1f
  restore_fpsimd
 -ret
 +1:  ret

  switch_to_guest_fpsimd:
  push    x4, lr
 @@ -682,6 +684,7 @@ switch_to_guest_fpsimd:
  mrs x2, cptr_el2
  bic x2, x2, #CPTR_EL2_TFP
  msr cptr_el2, x2
 +isb

  mrs x0, tpidr_el2

 @@ -692,6 +695,10 @@ switch_to_guest_fpsimd:
  add x2, x0, #VCPU_CONTEXT
  bl __restore_fpsimd

 +skip_32bit_state x3, 1f
 +ldr x4, [x2, #CPU_SYSREG_OFFSET(FPEXC32_EL2)]
 +msr fpexc32_el2, x4
 +1:
  pop x4, lr
  pop x2, x3
  pop x0, x1
 @@ -754,9 +761,7 @@ __kvm_vcpu_return:
  add x2, x0, #VCPU_CONTEXT

  save_guest_regs
 -skip_fpsimd_state x3, 1f
  bl __save_fpsimd
 -1:
  bl __save_sysregs

  skip_debug_state x3, 1f
 @@ -777,9 +782,7 @@ __kvm_vcpu_return:
  kern_hyp_va x2

  bl __restore_sysregs
 -skip_fpsimd_state x3, 1f
  bl __restore_fpsimd
 -1:
  /* Clear FPSIMD and Trace trapping */
  msr cptr_el2, xzr


 
 Marc and I have hunted down the issue at KVM Forum and we believe we've
 found the issue.  Please have a look at the following follow-up patch to
 Marc's patch above:
 
 diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
 index 8b2a73b4..842e727 100644
 --- a/arch/arm64/kvm/hyp.S
 +++ b/arch/arm64/kvm/hyp.S
 @@ -769,11 +769,26 @@
  
  .macro activate_traps
   ldr x2, [x0, #VCPU_HCR_EL2]
 +
 + /*
 +  * We are about to set CPTR_EL2.TFP to trap all floating point
 +  * register accesses to EL2, however, the ARM ARM clearly states that
 +  * traps are only taken to EL2 if the operation would not otherwise
 +  * trap to EL1.  Therefore, always make sure that for 32-bit guests,
 +  * we set FPEXC.EN to prevent traps to EL1, when setting the TFP bit.
 +  */
 + tbnz    x2, #HCR_RW_SHIFT, 99f // open code skip_32bit_state
 + mov x3, #(1 << 30)
 + msr fpexc32_el2, x3
 + isb
 +99:
 +
   msr hcr_el2, x2
   mov x2, #CPTR_EL2_TTA
   orr x2, x2, #CPTR_EL2_TFP
   msr cptr_el2, x2
  
 +
   mov x2, #(1 << 15)  // Trap CP15 Cr=15
   msr hstr_el2, x2
  
 
 
 Thanks,
 -Christoffer
 

Re: [RFC PATCH 0/5] KVM: x86: exit to user space on unhandled MSR accesses

2015-08-19 Thread Bandan Das
Peter Hornyack peterhorny...@google.com writes:

 There are numerous MSRs that kvm does not currently handle. On Intel
 platforms we have observed guest VMs accessing some of these MSRs (for
 example, MSR_PLATFORM_INFO) and behaving poorly (to the point of guest OS
 crashes) when they receive a GP fault because the MSR is not emulated. This
 patchset adds a new kvm exit path for unhandled MSR accesses that allows
 user space to emulate additional MSRs without having to implement them in
 kvm.

^^ So, I am trying to understand this motivation. A while back when 
a patch was posted to emulate MSR_PLATFORM_INFO, it was rejected.
Why ? Because it seemed impossible to emulate it correctly (most concerns were
related to migration iirc). Although I haven't reviewed all patches in this 
series
yet, what I understand from the above message is: it's ok to emulate
MSR_PLATFORM_INFO *incorrectly* as long as we are doing it in the user space.

I understand the part where it makes sense to move stuff to userspace.
But if kvm isn't emulating certain msrs yet, either we should add support,
or they haven't been added because it's not possible to emulate them
correctly. The logic that it's probably ok to let userspace do the (incorrect)
emulation is something I don't understand. It seems like the next in line
is to let userspace emulate their own version of unimplemented x86 instructions.


 The core of the patchset modifies the vmx handle_rdmsr and handle_wrmsr
 functions to exit to user space on MSR reads/writes that kvm can't handle
 itself. Then, on the return path into kvm we check for outstanding user
 space MSR completions and either complete the MSR access successfully or
 inject a GP fault as kvm would do by default. This new exit path must be
 enabled for the vm via the KVM_CAP_UNHANDLED_MSR_EXITS capability.

 In the future we plan to extend this functionality to allow user space to
 register the MSRs that it would like to handle itself, even if kvm already
 provides an implementation. In the long-term we will move the

I seriously hope we don't do this!

Bandan
 implementation of all non-performance-sensitive MSRs to user space,
 reducing the potential attack surface of kvm and allowing us to respond to
 bugs more quickly.

 This patchset has been tested with our non-qemu user space hypervisor on
 vmx platforms; svm support is not implemented.

 Peter Hornyack (5):
   KVM: x86: refactor vmx rdmsr/wrmsr completion into new functions
   KVM: add KVM_EXIT_MSR exit reason and capability.
   KVM: x86: add msr_exits_supported to kvm_x86_ops
   KVM: x86: enable unhandled MSR exits for vmx
   KVM: x86: add trace events for unhandled MSR exits

  Documentation/virtual/kvm/api.txt |  48 +++
  arch/x86/include/asm/kvm_host.h   |   2 +
  arch/x86/kvm/svm.c|   6 ++
  arch/x86/kvm/trace.h  |  28 +
  arch/x86/kvm/vmx.c| 126 
 ++
  arch/x86/kvm/x86.c|  13 
  include/trace/events/kvm.h|   2 +-
  include/uapi/linux/kvm.h  |  14 +
  8 files changed, 227 insertions(+), 12 deletions(-)


Re: One off question: Hot vertical scaling of a KVM?

2015-08-19 Thread Bandan Das
ROZUMNY, VICTOR vr9...@att.com writes:

 Hello-

 Your site states that one off questions might could be answered via
 email so here I am.

 I have limited knowledge of KVM, but it is my understanding that in
 order to vertically scale RAM for a KVM the guest needs to be shut
 down, resized, and rebooted, resulting in a 5-10 minute interruption
 of service.
What's vertical scaling? Do you mean increasing the amount of memory
the guest sees? You could use QEMU memory hotplug, I think, but that
requires that the guest was booted with enough DIMM slots.

Bandan

 Is this true and if so do you know of any efforts to change this
 behavior?

 Thank you!

 v.

 Vic Rozumny | Principal Technical Architect AIC - AT&T Integrated
 Cloud | Complex Engineering







PING: [PATCH] KVM: arm64: Decode basic HYP fault information

2015-08-19 Thread Pavel Fedin
PING

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia


 -Original Message-
 From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf 
 Of Pavel Fedin
 Sent: Tuesday, August 11, 2015 10:34 AM
 To: kvm...@lists.cs.columbia.edu; kvm@vger.kernel.org
 Cc: 'Christoffer Dall'; 'Marc Zyngier'
 Subject: [PATCH] KVM: arm64: Decode basic HYP fault information
 
 Print exception vector name, exception class and PC translated to EL1 virtual
 address. Significantly aids debugging HYP crashes without special means like
 JTAG.
 
 Signed-off-by: Pavel Fedin p.fe...@samsung.com
 ---
  arch/arm64/kvm/handle_exit.c | 30 +
  arch/arm64/kvm/hyp.S | 46 
 +---
  2 files changed, 48 insertions(+), 28 deletions(-)
 
 diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
 index 29b184a..4d70d64 100644
 --- a/arch/arm64/kvm/handle_exit.c
 +++ b/arch/arm64/kvm/handle_exit.c
 @@ -136,3 +136,33 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run 
 *run,
   return 0;
   }
  }
 +
 +static const char *const hyp_faults[] = {
 + "EL2t Synchronous",
 + "EL2t IRQ",
 + "EL2t FIQ",
 + "EL2t Error",
 + "EL2h Synchronous",
 + "EL2h IRQ",
 + "EL2h FIQ",
 + "EL2h Error",
 + "EL1 Synchronous",
 + "EL1 IRQ",
 + "EL1 FIQ",
 + "EL1 Error"
 +};
 +
 +void kvm_hyp_panic(unsigned long vector, unsigned int spsr, unsigned long pc,
 +unsigned int esr, unsigned long far, unsigned long hpfar,
 +unsigned long par, struct kvm_vcpu *vcpu)
 +{
 + pr_emerg("Unhandled HYP exception %s on VCPU %p\n",
 + hyp_faults[vector], vcpu);
 + pr_emerg("PC : %016lx SPSR : %08x ESR: %08x\n", pc, spsr, esr);
 + pr_emerg("FAR: %016lx HPFAR: %016lx PAR: %016lx\n", far, hpfar, par);
 +
 + pr_emerg("Exception class: %02x Translated PC: %016lx\n",
 + esr >> ESR_ELx_EC_SHIFT, pc - HYP_PAGE_OFFSET + PAGE_OFFSET);
 +
 + panic("HYP panic");
 +}
 diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
 index c81eaaf..62785cd 100644
 --- a/arch/arm64/kvm/hyp.S
 +++ b/arch/arm64/kvm/hyp.S
 @@ -1060,13 +1060,11 @@ __kvm_hyp_panic:
   ldr x2, [x0, #VCPU_HOST_CONTEXT]
   kern_hyp_va x2
 
 + mov x0, lr
   bl __restore_sysregs
 + mov lr, x0
 
 -1:   adr x0, __hyp_panic_str
 - adr x1, 2f
 - ldp x2, x3, [x1]
 - sub x0, x0, x2
 - add x0, x0, x3
 +1:   mov x0, lr
   mrs x1, spsr_el2
   mrs x2, elr_el2
   mrs x3, esr_el2
 @@ -1078,20 +1076,11 @@ __kvm_hyp_panic:
   mov lr, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
 PSR_MODE_EL1h)
   msr spsr_el2, lr
 - ldr lr, =panic
 + ldr lr, =kvm_hyp_panic
   msr elr_el2, lr
   eret
 -
 - .align  3
 -2:   .quad   HYP_PAGE_OFFSET
 - .quad   PAGE_OFFSET
  ENDPROC(__kvm_hyp_panic)
 
 -__hyp_panic_str:
 - .ascii  "HYP panic:\nPS:%08x PC:%p ESR:%p\nFAR:%p HPFAR:%p PAR:%p\nVCPU:%p\n\0"
 -
 - .align  2
 -
  /*
   * u64 kvm_call_hyp(void *hypfn, ...);
   *
 @@ -1115,26 +1104,27 @@ ENTRY(kvm_call_hyp)
   ret
  ENDPROC(kvm_call_hyp)
 
 -.macro invalid_vector  label, target
 +.macro invalid_vector  label, N, target
   .align  2
  \label:
 + mov lr, #\N
   b \target
  ENDPROC(\label)
  .endm
 
   /* None of these should ever happen */
 - invalid_vector  el2t_sync_invalid, __kvm_hyp_panic
 - invalid_vector  el2t_irq_invalid, __kvm_hyp_panic
 - invalid_vector  el2t_fiq_invalid, __kvm_hyp_panic
 - invalid_vector  el2t_error_invalid, __kvm_hyp_panic
 - invalid_vector  el2h_sync_invalid, __kvm_hyp_panic
 - invalid_vector  el2h_irq_invalid, __kvm_hyp_panic
 - invalid_vector  el2h_fiq_invalid, __kvm_hyp_panic
 - invalid_vector  el2h_error_invalid, __kvm_hyp_panic
 - invalid_vector  el1_sync_invalid, __kvm_hyp_panic
 - invalid_vector  el1_irq_invalid, __kvm_hyp_panic
 - invalid_vector  el1_fiq_invalid, __kvm_hyp_panic
 - invalid_vector  el1_error_invalid, __kvm_hyp_panic
 + invalid_vector  el2t_sync_invalid, 0, __kvm_hyp_panic
 + invalid_vector  el2t_irq_invalid, 1, __kvm_hyp_panic
 + invalid_vector  el2t_fiq_invalid, 2, __kvm_hyp_panic
 + invalid_vector  el2t_error_invalid, 3, __kvm_hyp_panic
 + invalid_vector  el2h_sync_invalid, 4, __kvm_hyp_panic
 + invalid_vector  el2h_irq_invalid, 5, __kvm_hyp_panic
 + invalid_vector  el2h_fiq_invalid, 6, __kvm_hyp_panic
 + invalid_vector  el2h_error_invalid, 7, __kvm_hyp_panic
 + invalid_vector  el1_sync_invalid, 8, __kvm_hyp_panic
 + invalid_vector  el1_irq_invalid, 9, __kvm_hyp_panic
 + invalid_vector  el1_fiq_invalid, 10, __kvm_hyp_panic
 + invalid_vector  el1_error_invalid, 11, __kvm_hyp_panic
 
  el1_sync:// Guest trapped 

Re: [PATCH v2 4/5] KVM: add KVM_USER_EXIT vcpu ioctl for userspace exit

2015-08-19 Thread Avi Kivity

On 08/18/2015 10:57 PM, Paolo Bonzini wrote:


On 18/08/2015 11:30, Avi Kivity wrote:

KVM_USER_EXIT in practice should be so rare (at least with in-kernel
LAPIC) that I don't think this matters.  KVM_USER_EXIT is relatively
uninteresting, it only exists to provide an alternative to signals that
doesn't require expensive atomics on each and every KVM_RUN. :(

Ah, so the idea is to remove the cost of changing the signal mask?

Yes, it's explained in the cover letter.


Yes, although it looks like a thread-local operation, it takes a
process-wide lock.

IIRC the lock was only task-wide and uncontended.  Problem is, it's on
the node that created the thread rather than the node that is running
it, and inter-node atomics are really, really slow.


Cached inter-node atomics are (relatively) fast, but I think it really 
is a process-wide lock:


sigprocmask calls:

void __set_current_blocked(const sigset_t *newset)
{
	struct task_struct *tsk = current;

	spin_lock_irq(&tsk->sighand->siglock);
	__set_task_blocked(tsk, newset);
	spin_unlock_irq(&tsk->sighand->siglock);
}

struct sighand_struct {
	atomic_t		count;
	struct k_sigaction	action[_NSIG];
	spinlock_t		siglock;
	wait_queue_head_t	signalfd_wqh;
};

Since sigaction is usually process-wide, I conclude that so is
tsk->sighand.
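
(For readers following along: the per-KVM_RUN mask switch being discussed
comes from the usual signal-based kick pattern, roughly as below.  This is a
sketch against the existing KVM API; SIGUSR1 and the function names are
illustrative choices, not anything mandated by KVM.)

#include <linux/kvm.h>
#include <pthread.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

static void kick_handler(int sig) { (void)sig; }  /* only needs to interrupt KVM_RUN */

/* Keep the kick signal blocked in userspace so a self-directed signal stays
 * pending, and let KVM unblock it only while inside KVM_RUN.  It is this
 * temporary mask switch that ends up in __set_current_blocked() above. */
static void setup_signal_kick(int vcpu_fd)
{
        sigset_t block, run_blocked;
        struct kvm_signal_mask *kmask;

        signal(SIGUSR1, kick_handler);
        sigemptyset(&block);
        sigaddset(&block, SIGUSR1);
        pthread_sigmask(SIG_BLOCK, &block, NULL);

        /* Mask in effect during KVM_RUN: everything currently blocked except
         * SIGUSR1, so a kick makes KVM_RUN return with -EINTR. */
        pthread_sigmask(SIG_BLOCK, NULL, &run_blocked);
        sigdelset(&run_blocked, SIGUSR1);

        kmask = malloc(sizeof(*kmask) + 8);
        kmask->len = 8;                        /* kernel sigset size on x86-64 */
        memcpy(kmask->sigset, &run_blocked, 8);
        ioctl(vcpu_fd, KVM_SET_SIGNAL_MASK, kmask);
        free(kmask);
}

/* Another thread then kicks the vcpu with pthread_kill(vcpu_thread, SIGUSR1). */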






For guests spanning >1 host NUMA nodes it's not really practical to
ensure that the thread is created on the right node.  Even for guests
that fit into 1 host node, if you rely on AutoNUMA the VCPUs are created
too early for AutoNUMA to have any effect.  And newer machines have
frighteningly small nodes (two nodes per socket, so it's something like
7 pCPUs if you don't have hyper-threading enabled).  True, the NUMA
penalty within the same socket is not huge, but it still costs a few
thousand clock cycles on vmexit.flat and this feature sweeps it away
completely.


I expect most user wakeups are via irqfd, so indeed the performance of
KVM_USER_EXIT is uninteresting.

Yup, either irqfd or KVM_SET_SIGNAL_MSI.

Paolo
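
For completeness, the irqfd path mentioned above looks roughly like this (a
sketch against the existing API; it assumes an in-kernel irqchip has been
created and uses an illustrative GSI number):

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

/* Wire an eventfd to a guest interrupt line.  Signalling the eventfd later
 * injects the interrupt without any signal/sigmask dance on the vcpu thread. */
static int wire_irqfd(int vm_fd, uint32_t gsi)
{
        int efd = eventfd(0, EFD_CLOEXEC);
        struct kvm_irqfd irqfd = { .fd = (uint32_t)efd, .gsi = gsi };

        if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0)
                return -1;
        return efd;  /* write(efd, &(uint64_t){1}, 8) from any thread injects the IRQ */
}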




Re: [PATCH v3 2/5] KVM: add KVM_REQ_EXIT request for userspace exit

2015-08-19 Thread Wanpeng Li

On 8/14/15 6:08 PM, Radim Krčmář wrote:

When userspace wants KVM to exit to userspace, it sends a signal.
This has a disadvantage of requiring a change to the signal mask because
the signal needs to be blocked in userspace to stay pending when sending
to self.

Using a request flag allows us to shave 200-300 cycles from every
userspace exit and the speedup grows with NUMA because unblocking
touches shared spinlock.

The disadvantage is that it adds an overhead of one bit check for all
kernel exits.  A quick tracing shows that the ratio of userspace exits
after boot is about 1/5 and in subsequent run of nmap and kernel compile
has about 1/60, so the check should not regress global performance.

All signal_pending() calls are userspace exit requests, so we add a
check for KVM_REQ_EXIT there.  There is one omitted call in kvm_vcpu_run
because KVM_REQ_EXIT is implied in earlier check for requests.


Actually I see more SIGUSR1 signals being intercepted by signal_pending() 
in vcpu_enter_guest() and vcpu_run() with a win7 guest and kernel_irqchip=off.


Regards,
Wanpeng Li



Signed-off-by: Radim Krčmář rkrc...@redhat.com
---
  arch/x86/kvm/vmx.c   | 2 +-
  arch/x86/kvm/x86.c   | 6 ++
  include/linux/kvm_host.h | 8 +++-
  include/uapi/linux/kvm.h | 1 +
  virt/kvm/kvm_main.c  | 2 +-
  5 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 40c6180a0ecb..2b789a869ef5 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5833,7 +5833,7 @@ static int handle_invalid_guest_state(struct kvm_vcpu 
*vcpu)
goto out;
}
  
-		if (signal_pending(current))
+		if (kvm_need_exit(vcpu))
goto out;
if (need_resched())
schedule();
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e5850076bf7b..c3df7733af09 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6548,6 +6548,11 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
	++vcpu->stat.signal_exits;
break;
}
+   if (unlikely(kvm_has_request(KVM_REQ_EXIT, vcpu))) {
+   r = 0;
+   vcpu->run->exit_reason = KVM_EXIT_REQUEST;
+   break;
+   }
if (need_resched()) {
	srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
cond_resched();
@@ -6684,6 +6689,7 @@ out:
post_kvm_run_save(vcpu);
	if (vcpu->sigset_active)
		sigprocmask(SIG_SETMASK, &sigsaved, NULL);
+   clear_bit(KVM_REQ_EXIT, &vcpu->requests);
  
  	return r;

  }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 52e388367a26..dcc57171e3ec 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -121,7 +121,7 @@ static inline bool is_error_page(struct page *page)
  #define KVM_REQ_UNHALT 6
  #define KVM_REQ_MMU_SYNC   7
  #define KVM_REQ_CLOCK_UPDATE   8
-#define KVM_REQ_KICK   9
+#define KVM_REQ_EXIT   9
  #define KVM_REQ_DEACTIVATE_FPU10
  #define KVM_REQ_EVENT 11
  #define KVM_REQ_APF_HALT  12
@@ -1104,6 +1104,12 @@ static inline bool kvm_check_request(int req, struct 
kvm_vcpu *vcpu)
}
  }
  
+static inline bool kvm_need_exit(struct kvm_vcpu *vcpu)
+{
+   return signal_pending(current) ||
+  kvm_has_request(KVM_REQ_EXIT, vcpu);
+}
+
  extern bool kvm_rebooting;
  
  struct kvm_device {

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 26daafbba9ec..d996a7cdb4d2 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -184,6 +184,7 @@ struct kvm_s390_skeys {
  #define KVM_EXIT_SYSTEM_EVENT 24
  #define KVM_EXIT_S390_STSI25
  #define KVM_EXIT_IOAPIC_EOI   26
+#define KVM_EXIT_REQUEST  27
  
  /* For KVM_EXIT_INTERNAL_ERROR */

  /* Emulate instruction failed. */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d8db2f8fce9c..347899966178 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1914,7 +1914,7 @@ static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
}
if (kvm_cpu_has_pending_timer(vcpu))
return -EINTR;
-   if (signal_pending(current))
+   if (kvm_need_exit(vcpu))
return -EINTR;
  
  	return 0;

