On Tue, 21 Aug 2007, Kai Germaschewski wrote:

> One consistently recurring problem is
> 
> LustreError: 11169:0:(mdc_locks.c:420:mdc_enqueue()) ldlm_cli_enqueue: -2
> 
> on the client.

Here's another, potentially not even related issue:

I mounted the filesystem (h101:/root) onto another server node at 
/mount/root for testing. Accessing it worked fine, with no errors generated 
on either the client or the server(s). I then unmounted it, mounted it 
read-only, cd'd into /mount/root, and ran "chroot .".
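For reference, a sketch of the reproduction steps described above (hostname,
mount point, and mount options are taken from this report; the exact mount
invocation on your setup may differ):

```shell
# Initial read-write mount: access works, no errors on client or servers.
mount -t lustre h101:/root /mount/root
umount /mount/root

# Remount read-only, then chroot into it -- this is what triggered the LBUG.
mount -t lustre -o ro h101:/root /mount/root
cd /mount/root
chroot .
```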

This gave:

Lustre: Client root-client has started
Lustre: client ffff8103fba00c00 umount complete
Lustre: Client root-client has started
LustreError: 24772:0:(mdc_locks.c:409:mdc_enqueue()) ASSERTION(rc != -ENOENT) failed
LustreError: 24772:0:(tracefile.c:433:libcfs_assertion_failed()) LBUG
Lustre: 24772:0:(linux-debug.c:166:libcfs_debug_dumpstack()) showing stack for process 24772
bash          R  running task       0 24772  24717                     (NOTLB)
 ffffffff88017379 ffffffff8042f3b7 ffff810003f07200 ffff8101fabb1528
 ffff810003f07200 ffffffff8042f641 0000000000000000 ffffffff807570e0
 0000000000007715 000000000000772a fffffffffffffe99 ffffffff80816fc0
Call Trace:
 [<ffffffff8042f641>] vt_console_print+0x21a/0x230
 [<ffffffff8042f641>] vt_console_print+0x21a/0x230
 [<ffffffff8021623a>] release_console_sem+0x1a7/0x1eb
 [<ffffffff8021623a>] release_console_sem+0x1a7/0x1eb
 [<ffffffff802921df>] kallsyms_lookup+0x49/0x91
 [<ffffffff802921df>] kallsyms_lookup+0x49/0x91
 [<ffffffff802627ef>] printk_address+0x96/0xa0
 [<ffffffff802904a5>] __module_text_address+0x64/0x73
 [<ffffffff80289451>] __kernel_text_address+0x1a/0x26
 [<ffffffff80289451>] __kernel_text_address+0x1a/0x26
 [<ffffffff802622bc>] dump_trace+0x247/0x274
 [<ffffffff8026231d>] show_trace+0x34/0x47
 [<ffffffff80262408>] _show_stack+0xd8/0xe5
 [<ffffffff8800eba2>] :libcfs:lbug_with_loc+0x79/0xa3
 [<ffffffff880157c9>] :libcfs:trace_refill_stock+0x0/0x63
 [<ffffffff8811d7eb>] :mdc:mdc_enqueue+0x9dc/0x1465
 [<ffffffff88264845>] :lustre:ll_mdc_blocking_ast+0x0/0x4af
 [<ffffffff88186628>] :ptlrpc:ldlm_completion_ast+0x0/0x721
 [<ffffffff88175a47>] :ptlrpc:ldlm_resource_putref+0x192/0x374
 [<ffffffff8811e5cb>] :mdc:mdc_intent_lock+0x357/0x969
 [<ffffffff88186628>] :ptlrpc:ldlm_completion_ast+0x0/0x721
 [<ffffffff88264845>] :lustre:ll_mdc_blocking_ast+0x0/0x4af
 [<ffffffff88183889>] :ptlrpc:ldlm_cancel_lru+0x80/0x32d
 [<ffffffff88172d2c>] :ptlrpc:__ldlm_handle2lock+0x2f5/0x354
 [<ffffffff88262168>] :lustre:ll_i2gids+0x5d/0xfe
 [<ffffffff8826227e>] :lustre:ll_prepare_mdc_op_data+0x75/0xfb
 [<ffffffff88264122>] :lustre:ll_lookup_it+0x347/0x831
 [<ffffffff88264845>] :lustre:ll_mdc_blocking_ast+0x0/0x4af
 [<ffffffff8822e802>] :lustre:ll_intent_drop_lock+0x85/0x93
 [<ffffffff8822ec4b>] :lustre:ll_revalidate_it+0x20c/0xabc
 [<ffffffff88264699>] :lustre:ll_lookup_nd+0x8d/0xfe
 [<ffffffff80220a50>] d_alloc+0x153/0x18f
 [<ffffffff802af88f>] real_lookup+0x7b/0x11e
 [<ffffffff8020cb20>] do_lookup+0x67/0xbf
 [<ffffffff80209afe>] __link_path_walk+0xa44/0xef1
 [<ffffffff80209f97>] __link_path_walk+0xedd/0xef1
 [<ffffffff8020e4e6>] link_path_walk+0x4a/0xbe
 [<ffffffff80225332>] do_filp_open+0x50/0x71
 [<ffffffff8020c98c>] do_path_lookup+0x1ab/0x1cd
 [<ffffffff80221775>] __path_lookup_intent_open+0x54/0x92
 [<ffffffff80219a7a>] open_namei+0x94/0x67e
 [<ffffffff802b02b9>] __user_walk_fd_it+0x48/0x53
 [<ffffffff80226643>] vfs_stat_fd+0x44/0x78
 [<ffffffff80225332>] do_filp_open+0x50/0x71
 [<ffffffff8822e918>] :lustre:ll_intent_release+0x0/0x127
 [<ffffffff802187d2>] do_sys_open+0x44/0xc8
 [<ffffffff802591ce>] system_call+0x7e/0x83

LustreError: dumping log to /tmp/lustre-log.1187721493.24772
LustreError: 24772:0:(linux-debug.c:91:libcfs_run_upcall()) Error -2 invoking LNET upcall /usr/lib/lustre/lnet_upcall LBUG,/home/kai/lustre-1.6.0.1-ql2-rc2/lnet/libcfs/tracefile.c,libcfs_assertion_failed,433; check /proc/sys/lnet/upcall
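The upcall error itself is secondary: -2 is ENOENT, meaning the configured
upcall binary /usr/lib/lustre/lnet_upcall does not exist on this node. A
sketch of checking and silencing it, assuming the standard /proc path the
message points at (setting the upcall to NONE disables it):

```shell
# Show the currently configured LNET upcall handler.
cat /proc/sys/lnet/upcall

# If the handler binary is absent, disable the upcall to stop this error.
echo NONE > /proc/sys/lnet/upcall
```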

--Kai


_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
