As a quick test I added a printk to the loop, right after the while():

     while (atomic_read(&completed) != needed) {
         printk("kvm_flush_remote_tlbs: completed = %d, needed = %d\n",
                atomic_read(&completed), needed);
         cpu_relax();
         barrier();
     }


This is the output right before a lockup:

Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 2, needed = 2
Oct 24 16:03:47 bldr-ccm20 kernel: kvm_flush_remote_tlbs: completed = 1, needed = 2
Oct 24 16:03:57 bldr-ccm20 last message repeated 105738 times
Oct 24 16:03:57 bldr-ccm20 kernel: BUG: soft lockup detected on CPU#0!
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c044a0b7>] softlockup_tick+0x98/0xa6
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c042cc98>] update_process_times+0x39/0x5c
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0424130>] vprintk+0x288/0x2bc
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0459db7>] follow_page+0x168/0x1b6
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c04d8067>] cfq_slice_async_store+0x5/0x38
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0459db7>] follow_page+0x168/0x1b6
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c0406406>] do_IRQ+0xa5/0xae
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c040492e>] common_interrupt+0x1a/0x20
Oct 24 16:03:57 bldr-ccm20 kernel:  [<c042417c>] printk+0x18/0x8e
Oct 24 16:03:57 bldr-ccm20 kernel:  [<f89a9812>] kvm_flush_remote_tlbs+0xe0/0xf2 [kvm]
...


I'd like to get a solution for RHEL5, so I am attempting to backport
smp_call_function_mask(). I'm open to other suggestions if you think it is
corruption or the problem lies somewhere else.

thanks,

david


Avi Kivity wrote:
> david ahern wrote:
>> I am trying, unsuccessfully so far, to get a vm running with 4 cpus.
>> It is failing with a soft lockup:
>>
>> BUG: soft lockup detected on CPU#3!
>>  [<c044a05f>] softlockup_tick+0x98/0xa6
>>  [<c042ccd4>] update_process_times+0x39/0x5c
>>  [<c04176ec>] smp_apic_timer_interrupt+0x5c/0x64
>>  [<c04049bf>] apic_timer_interrupt+0x1f/0x24
>>  [<f8a3c800>] kvm_flush_remote_tlbs+0xce/0xdb [kvm]
>>  [<f8a41a72>] kvm_mmu_pte_write+0x1f2/0x368 [kvm]
>>  [<f8a3d335>] emulator_write_emulated_onepage+0x73/0xe6 [kvm]
>>  [<f8a4542c>] x86_emulate_insn+0x20d8/0x3348 [kvm]
>>  [<f8a43106>] x86_decode_insn+0x624/0x872 [kvm]
>>  [<f8a3d764>] emulate_instruction+0x12b/0x258 [kvm]
>>  [<f88af2e4>] handle_exception+0x163/0x23f [kvm_intel]
>>  [<f88af09b>] kvm_handle_exit+0x70/0x8a [kvm_intel]
>>  [<f8a3deae>] kvm_vcpu_ioctl_run+0x234/0x339 [kvm]
>>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>>  [<f8a3e33c>] kvm_vcpu_ioctl+0xbd/0xa8f [kvm]
>>  [<c0408f60>] save_i387+0x23f/0x273
>>  [<c04db730>] __next_cpu+0x12/0x21
>>  [<c041c97f>] find_busiest_group+0x177/0x462
>>  [<c04031cd>] setup_sigcontext+0x10d/0x190
>>  [<c0453bed>] get_page_from_freelist+0x96/0x310
>>  [<c0453dfd>] get_page_from_freelist+0x2a6/0x310
>>  [<c0415a5c>] flush_tlb_others+0x83/0xb3
>>  [<c0415d63>] flush_tlb_page+0x74/0x77
>>  [<c0454cf1>] set_page_dirty_balance+0x8/0x35
>>  [<c0459c1b>] do_wp_page+0x3a5/0x3bd
>>  [<c042e97e>] dequeue_signal+0x2d/0x9c
>>  [<c045af6b>] __handle_mm_fault+0x81b/0x87b
>>  [<f8a3e27f>] kvm_vcpu_ioctl+0x0/0xa8f [kvm]
>>  [<c0479cac>] do_ioctl+0x1c/0x5d
>>  [<c0479f37>] vfs_ioctl+0x24a/0x25c
>>  [<c0479f91>] sys_ioctl+0x48/0x5f
>>  [<c0403eff>] syscall_call+0x7/0xb
>>
>>
>> I am working with kvm-48, but also tried the 20071020 snapshot. The
>> stuck code is kvm_flush_remote_tlbs():
>>
>>     while (atomic_read(&completed) != needed) {
>>         cpu_relax();
>>         barrier();
>>     }
>>
>> which I take to mean one of the CPUs is not ack'ing the TLB flush
>> request.
>>   
> 
> I don't think it's a cpu not responding.  I've stared at the code for a
> while (we had this before) and the actual IPI/ack is fine.
> 
> What's probably happening is that corruption of the mmu data structures
> is causing kvm_flush_remote_tlbs() to be called repeatedly.  Since it's
> a very slow function, the lockup detector blames it for any lockup it
> sees even though it is innocent.
> 
> [we had exactly this issue before and it was indeed fixed after an rmap
> corruption was corrected]
> 
> 
>> Is this is a known bug and any options to correct it? It works fine
>> with 2 vcpus, but for a comparison with xen I'd like to get the vm
>> working with 4.
>>
>>
>>   
> 
> - please send (privately, it's big) an 'objdump -Sr' of mmu.o
> - what guest are you running?  if it's publicly available, I can try to
> replicate it
> - at what stage does the failure occur?  if it's early on, we can try
> running with AUDIT or DEBUG
> - otherwise, I'll send debugging patches to try and see what's going on
> 

_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel