Luca Tettamanti wrote:
> Hi,
> I just got this warning when doing a 'rmmod kvm-intel':
>
> =================================
> [ INFO: inconsistent lock state ]
> 2.6.23-rc3-libata-g28e8351a-dirty #93
> ---------------------------------
> inconsistent {hardirq-on-W} -> {in-hardirq-W} usage.
> udevd/2800 [HC1[1]:SC0[0]:HE0:SE1] takes:
>  (kvm_lock){+-..}, at: [<f9c8718b>] hardware_disable+0x31/0xbc [kvm]
> {hardirq-on-W} state was registered at:
>   [<c013e95d>] __lock_acquire+0x487/0xbcc
>   [<c013e1e1>] debug_check_no_locks_freed+0x110/0x11a
>   [<c01cd920>] __next_cpu+0x12/0x21
>   [<c0146ab6>] is_module_address+0x35/0x92
>   [<c013f490>] lock_acquire+0x68/0x80
>   [<f9c87039>] kvm_dev_ioctl+0xb5/0x1c8 [kvm]
>   [<c02f19ce>] _spin_lock+0x36/0x5f
>   [<f9c87039>] kvm_dev_ioctl+0xb5/0x1c8 [kvm]
>   [<f9c87039>] kvm_dev_ioctl+0xb5/0x1c8 [kvm]
>   [<c0119386>] do_page_fault+0x30d/0x734
>   [<f9c86f84>] kvm_dev_ioctl+0x0/0x1c8 [kvm]
>   [<c01756cf>] do_ioctl+0x1f/0x62
>   [<c0175949>] vfs_ioctl+0x237/0x249
>   [<c013e0ae>] trace_hardirqs_on+0x11a/0x13d
>   [<c017598e>] sys_ioctl+0x33/0x4d
>   [<c0103f3e>] syscall_call+0x7/0xb
>   [<ffffffff>] 0xffffffff
> irq event stamp: 682504
> hardirqs last  enabled at (682503): [<c0151ce8>] get_page_from_freelist+0x18c/0x35c
> hardirqs last disabled at (682504): [<c01049d5>] call_function_interrupt+0x29/0x38
> softirqs last  enabled at (679024): [<c01067ed>] do_softirq+0x61/0xd1
> softirqs last disabled at (679017): [<c01067ed>] do_softirq+0x61/0xd1
>
> other info that might help us debug this:
> 1 lock held by udevd/2800:
>  #0:  (&inode->i_mutex){--..}, at: [<c01660d8>] shmem_file_write+0x7e/0x2a6
>
> stack backtrace:
>  [<c013d370>] print_usage_bug+0x138/0x142
>  [<c013db02>] mark_lock+0xae/0x44a
>  [<c02f1fc9>] _spin_unlock+0x25/0x3b
>  [<c013e8ec>] __lock_acquire+0x416/0xbcc
>  [<c013f490>] lock_acquire+0x68/0x80
>  [<f9c8718b>] hardware_disable+0x31/0xbc [kvm]
>  [<c02f19ce>] _spin_lock+0x36/0x5f
>  [<f9c8718b>] hardware_disable+0x31/0xbc [kvm]
>  [<f9c8715a>] hardware_disable+0x0/0xbc [kvm]
>  [<f9c8718b>] hardware_disable+0x31/0xbc [kvm]
>  [<c013c3c8>] lock_release_holdtime+0x3d/0x4a
>  [<f9c8715a>] hardware_disable+0x0/0xbc [kvm]
>  [<c0113bcf>] smp_call_function_interrupt+0x37/0x52
>  [<c01049df>] call_function_interrupt+0x33/0x38
>  [<c01100d8>] p4_rearm+0x4/0xf2c
>  [<c0151df1>] get_page_from_freelist+0x295/0x35c
>  [<c0151f17>] __alloc_pages+0x5f/0x295
>  [<c013c377>] put_lock_stats+0xa/0x1e
>  [<c0165816>] shmem_getpage+0x38c/0x59a
>  [<c02f06bf>] __mutex_lock_slowpath+0x290/0x298
>  [<c0166185>] shmem_file_write+0x12b/0x2a6
>  [<c016605a>] shmem_file_write+0x0/0x2a6
>  [<c016be5a>] vfs_write+0x8a/0x10c
>  [<c016c3d3>] sys_write+0x41/0x67
>  [<c0103f3e>] syscall_call+0x7/0xb
>  =======================
>
> (this is kvm-36, but the bug was there in previous versions too)
>
> kvm_lock is taken in kvm_create_vm() with interrupts enabled. Later, at
> rmmod time, it is taken again in decache_vcpus_on_cpu() (called from
> hardware_disable()) with interrupts disabled, because hardware_disable()
> runs via on_each_cpu(). In theory it should be harmless, since the
> refcount on /dev/kvm prevents the actual deadlock. Switching to the
> irq-safe spin_lock_* variants seems fine; do you want a patch?
>
>   
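
For reference, the change Luca describes would look roughly like this
(an untested sketch against the kvm-36 kvm_main.c; the list walk inside
the function is elided):

static void decache_vcpus_on_cpu(int cpu)
{
        unsigned long flags;

        /*
         * spin_lock() -> spin_lock_irqsave(): safe whether we get here
         * from process context or from the IPI handler that
         * on_each_cpu() raises with interrupts already disabled.
         */
        spin_lock_irqsave(&kvm_lock, flags);
        /* ... walk the VMs and unload the vcpus loaded on this cpu ... */
        spin_unlock_irqrestore(&kvm_lock, flags);
}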

Maybe hoisting the lock out of decache_vcpus_on_cpu() into the callers 
of hardware_disable() would be better.  We could have a 
__hardware_disable() that does not take the lock (for use from 
on_each_cpu() or smp_call_function_single()) and a hardware_disable() 
that does take the lock.
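
Something along these lines (a rough, untested sketch; apart from
hardware_disable() itself the names are only illustrative, and
decache_vcpus_on_cpu() would no longer take kvm_lock internally):

/*
 * Lock-free variant: for use from on_each_cpu() /
 * smp_call_function_single(), where the initiating CPU holds kvm_lock
 * in process context around the cross-call.
 */
static void __hardware_disable(void *garbage)
{
        decache_vcpus_on_cpu(smp_processor_id());
        /* ... arch-specific VMX/SVM disable ... */
}

/* Locked wrapper for ordinary process-context callers. */
static void hardware_disable(void *garbage)
{
        spin_lock(&kvm_lock);
        __hardware_disable(garbage);
        spin_unlock(&kvm_lock);
}

The rmmod path would then wrap the cross-call itself:

        spin_lock(&kvm_lock);
        on_each_cpu(__hardware_disable, NULL, 0, 1);
        spin_unlock(&kvm_lock);

so kvm_lock is only ever taken with interrupts enabled and the
inconsistency lockdep complains about disappears.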

But there may be better alternatives.

-- 
error compiling committee.c: too many arguments to function

