Hi,

Just saw this lockdep warning when starting a guest with an assigned device.

Cheers,
Mark.

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.29-0.74.rc3.git3.fc11.x86_64 #1
-------------------------------------------------------
qemu-kvm/3706 is trying to acquire lock:
 (&kvm->lock){--..}, at: [<ffffffffa013a25f>] kvm_emulate_pio+0x1ab/0x1ff [kvm]

but task is already holding lock:
 (&kvm->slots_lock){----}, at: [<ffffffffa013c4c0>] kvm_arch_vcpu_ioctl_run+0x497/0x73a [kvm]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&kvm->slots_lock){----}:
       [<ffffffff8106e9c1>] __lock_acquire+0xaab/0xc41
       [<ffffffff8106ebe4>] lock_acquire+0x8d/0xba
       [<ffffffff813826ae>] down_read+0x4b/0x7f
       [<ffffffffa0137ff2>] kvm_iommu_map_guest+0x62/0xb8 [kvm]
       [<ffffffffa01363ea>] kvm_vm_ioctl+0x3f4/0x7f1 [kvm]
       [<ffffffff810eac30>] vfs_ioctl+0x2a/0x78
       [<ffffffff810eb0e9>] do_vfs_ioctl+0x46b/0x4ab
       [<ffffffff810eb17e>] sys_ioctl+0x55/0x77
       [<ffffffff810112ba>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (&kvm->lock){--..}:
       [<ffffffff8106e862>] __lock_acquire+0x94c/0xc41
       [<ffffffff8106ebe4>] lock_acquire+0x8d/0xba
       [<ffffffff8138205a>] __mutex_lock_common+0x107/0x39c
       [<ffffffff81382398>] mutex_lock_nested+0x35/0x3a
       [<ffffffffa013a25f>] kvm_emulate_pio+0x1ab/0x1ff [kvm]
       [<ffffffffa015c875>] handle_io+0x6e/0x76 [kvm_intel]
       [<ffffffffa015d202>] kvm_handle_exit+0x1ba/0x1db [kvm_intel]
       [<ffffffffa013c534>] kvm_arch_vcpu_ioctl_run+0x50b/0x73a [kvm]
       [<ffffffffa01344a7>] kvm_vcpu_ioctl+0xfc/0x48b [kvm]
       [<ffffffff810eac30>] vfs_ioctl+0x2a/0x78
       [<ffffffff810eb0e9>] do_vfs_ioctl+0x46b/0x4ab
       [<ffffffff810eb17e>] sys_ioctl+0x55/0x77
       [<ffffffff810112ba>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

2 locks held by qemu-kvm/3706:
 #0:  (&vcpu->mutex){--..}, at: [<ffffffffa0136ceb>] vcpu_load+0x15/0x37 [kvm]
 #1:  (&kvm->slots_lock){----}, at: [<ffffffffa013c4c0>] kvm_arch_vcpu_ioctl_run+0x497/0x73a [kvm]

stack backtrace:
Pid: 3706, comm: qemu-kvm Not tainted 2.6.29-0.74.rc3.git3.fc11.x86_64 #1
Call Trace:
 [<ffffffff8106dc65>] print_circular_bug_tail+0x71/0x7c
 [<ffffffff8106e862>] __lock_acquire+0x94c/0xc41
 [<ffffffff8106ebe4>] lock_acquire+0x8d/0xba
 [<ffffffffa013a25f>] ? kvm_emulate_pio+0x1ab/0x1ff [kvm]
 [<ffffffff8138205a>] __mutex_lock_common+0x107/0x39c
 [<ffffffffa013a25f>] ? kvm_emulate_pio+0x1ab/0x1ff [kvm]
 [<ffffffffa013a25f>] ? kvm_emulate_pio+0x1ab/0x1ff [kvm]
 [<ffffffff81382398>] mutex_lock_nested+0x35/0x3a
 [<ffffffffa013a25f>] kvm_emulate_pio+0x1ab/0x1ff [kvm]
 [<ffffffffa015b695>] ? kvm_register_read+0x26/0x35 [kvm_intel]
 [<ffffffffa015c875>] handle_io+0x6e/0x76 [kvm_intel]
 [<ffffffffa015d202>] kvm_handle_exit+0x1ba/0x1db [kvm_intel]
 [<ffffffffa013c534>] kvm_arch_vcpu_ioctl_run+0x50b/0x73a [kvm]
 [<ffffffffa01344a7>] kvm_vcpu_ioctl+0xfc/0x48b [kvm]
 [<ffffffff81163618>] ? inode_has_perm+0x6c/0x72
 [<ffffffff810eac30>] vfs_ioctl+0x2a/0x78
 [<ffffffff810eb0e9>] do_vfs_ioctl+0x46b/0x4ab
 [<ffffffff810eb17e>] sys_ioctl+0x55/0x77
 [<ffffffff810112ba>] system_call_fastpath+0x16/0x1b

