So, I untarred the kernel onto a fresh ceph fs, ran a make menuconfig,
and then tried to unmount it so I could remount with all the -o options
below, but I didn't get that far.  The umount itself gave me the
following:

Mar  3 12:45:40 node100 kernel: [  349.582955] /srv/cgs/local/ceph-dev/src/ke: 
kill_sb ffff8101a91a1c00
Mar  3 12:45:40 node100 kernel: [  349.696501] BUG: Dentry 
ffff81016c41e820{i=0,n=.7770.tmp} still in use (1) [unmount of ceph ceph]
Mar  3 12:45:40 node100 kernel: [  349.696619] ------------[ cut here 
]------------
Mar  3 12:45:40 node100 kernel: [  349.696678] kernel BUG at 
/build/buildd/linux-2.6.24/fs/dcache.c:648!
Mar  3 12:45:40 node100 kernel: [  349.696737] invalid opcode: 0000 [1] SMP 
Mar  3 12:45:40 node100 kernel: [  349.696864] CPU 0 
Mar  3 12:45:40 node100 kernel: [  349.696951] Modules linked in: ceph btrfs 
libcrc32c nfs nfsd lockd nfs_acl auth_rpcgss sunrpc exportfs autofs4 
iptable_filter ip_tables x_tables ipv6 psmouse serio_raw evdev dcdbas pcspkr 
iTCO_wdt iTCO_vendor_support e752x_edac button edac_core shpchp pci_hotplug 
ext3 jbd ext2 mbcache sr_mod sg cdrom af_packet ehci_hcd e1000 uhci_hcd usbcore 
dm_mirror dm_snapshot dm_mod thermal processor fan fbcon tileblit font bitblit 
softcursor fuse aoe sd_mod mptspi mptscsih mptbase scsi_transport_spi scsi_mod
Mar  3 12:45:40 node100 kernel: [  349.699288] Pid: 8282, comm: umount Not 
tainted 2.6.24-23-server #1
Mar  3 12:45:40 node100 kernel: [  349.699345] RIP: 0010:[<ffffffff802c8a84>]  
[<ffffffff802c8a84>] shrink_dcache_for_umount_subtree+0x274/0x280
Mar  3 12:45:40 node100 kernel: [  349.699462] RSP: 0018:ffff81018d509df8  
EFLAGS: 00010296
Mar  3 12:45:40 node100 kernel: [  349.699518] RAX: 0000000000000068 RBX: 
ffff81016c41e820 RCX: 0000000000000001
Mar  3 12:45:40 node100 kernel: [  349.700916] RDX: ffffffff8058ff68 RSI: 
0000000000000086 RDI: ffffffff8058ff60
Mar  3 12:45:40 node100 kernel: [  349.700974] RBP: ffff81018e0a4750 R08: 
0000000a49539867 R09: 0000000000000000
Mar  3 12:45:40 node100 kernel: [  349.701032] R10: ffff810001044c60 R11: 
0000000000000001 R12: ffff81016c41e880
Mar  3 12:45:40 node100 kernel: [  349.701090] R13: 000000000000593f R14: 
ffff8101a91a1c00 R15: ffff81018d509f38
Mar  3 12:45:40 node100 kernel: [  349.701149] FS:  00007fcaa0e676e0(0000) 
GS:ffffffff805c5000(0000) knlGS:0000000000000000
Mar  3 12:45:40 node100 kernel: [  349.701220] CS:  0010 DS: 0000 ES: 0000 CR0: 
000000008005003b
Mar  3 12:45:40 node100 kernel: [  349.701277] CR2: 00007fcaa09410b6 CR3: 
0000000179983000 CR4: 00000000000006e0
Mar  3 12:45:40 node100 kernel: [  349.701335] DR0: 0000000000000000 DR1: 
0000000000000000 DR2: 0000000000000000
Mar  3 12:45:40 node100 kernel: [  349.701394] DR3: 0000000000000000 DR6: 
00000000ffff0ff0 DR7: 0000000000000400
Mar  3 12:45:40 node100 kernel: [  349.701452] Process umount (pid: 8282, 
threadinfo ffff81018d508000, task ffff81017a936fe0)
Mar  3 12:45:40 node100 kernel: [  349.701523] Stack:  ffff8101a91a1e38 
ffff8101a91a1c00 ffffffff88447a60 00007fffa8e77478
Mar  3 12:45:40 node100 kernel: [  349.701777]  0000000000000000 
ffffffff802c9349 ffff8101a91a1c00 ffffffff802b7369
Mar  3 12:45:40 node100 kernel: [  349.701992]  ffff8101a91a1c00 
0000000000000022 ffff8101a800e000 ffffffff802b7489
Mar  3 12:45:40 node100 kernel: [  349.702161] Call Trace:
Mar  3 12:45:40 node100 kernel: [  349.702266]  [<ffffffff802c9349>] 
shrink_dcache_for_umount+0x29/0x50
Mar  3 12:45:40 node100 kernel: [  349.702326]  [<ffffffff802b7369>] 
generic_shutdown_super+0x19/0x110
Mar  3 12:45:40 node100 kernel: [  349.702388]  [<ffffffff802b7489>] 
kill_anon_super+0x9/0x40
Mar  3 12:45:40 node100 kernel: [  349.702460]  [<ffffffff88415414>] 
:ceph:ceph_kill_sb+0x64/0xc0
Mar  3 12:45:40 node100 kernel: [  349.702526]  [<ffffffff802b754f>] 
deactivate_super+0x6f/0x90
Mar  3 12:45:40 node100 kernel: [  349.702588]  [<ffffffff802cdfab>] 
sys_umount+0x6b/0x2e0
Mar  3 12:45:40 node100 kernel: [  349.702652]  [<ffffffff802b91d7>] 
sys_newstat+0x27/0x50
Mar  3 12:45:40 node100 kernel: [  349.702714]  [<ffffffff803523c2>] 
__up_write+0x22/0x130
Mar  3 12:45:40 node100 kernel: [  349.702780]  [<ffffffff8020c39e>] 
system_call+0x7e/0x83
Mar  3 12:45:40 node100 kernel: [  349.702845] 
Mar  3 12:45:40 node100 kernel: [  349.702894] 
Mar  3 12:45:40 node100 kernel: [  349.702895] Code: 0f 0b eb fe 0f 0b eb fe 0f 
1f 40 00 55 48 89 fd 53 48 89 f3 
Mar  3 12:45:40 node100 kernel: [  349.703777] RIP  [<ffffffff802c8a84>] 
shrink_dcache_for_umount_subtree+0x274/0x280
Mar  3 12:45:40 node100 kernel: [  349.703885]  RSP <ffff81018d509df8>
Mar  3 12:45:40 node100 kernel: [  349.703942] ---[ end trace fe1c2e0b1491c96f 
]---
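
For what it's worth, the BUG() here appears to be the sanity check in
shrink_dcache_for_umount_subtree(): by the time generic_shutdown_super()
tears the superblock down, every dentry on it should be at a zero
reference count.  Roughly, the 2.6.24 fs/dcache.c logic around line 648
looks something like the sketch below (paraphrased, not the literal
kernel source; check_dentry_unused is just my own name for the check,
which really lives inline in that function):

#include <linux/dcache.h>
#include <linux/kernel.h>

/* Sketch: the check that fires the oops above (paraphrased). */
static void check_dentry_unused(struct dentry *dentry)
{
	/*
	 * Nothing should still hold a reference to any dentry of a
	 * superblock that is being unmounted.  A non-zero d_count means
	 * the filesystem (ceph, in this trace) leaked a reference --
	 * here on the ".7770.tmp" entry named in the report.
	 */
	if (atomic_read(&dentry->d_count) != 0) {
		printk(KERN_ERR
		       "BUG: Dentry %p{i=%lx,n=%s} still in use (%d)"
		       " [unmount of %s %s]\n",
		       dentry,
		       dentry->d_inode ? dentry->d_inode->i_ino : 0UL,
		       dentry->d_name.name,
		       atomic_read(&dentry->d_count),
		       dentry->d_sb->s_type->name,
		       dentry->d_sb->s_id);
		BUG();		/* "kernel BUG at fs/dcache.c:648" */
	}
}

So it looks like the dcache is catching a leaked reference: some ceph
code path apparently grabbed a reference on that .7770.tmp dentry and
never dropped it, which is why the tree can't be torn down at umount.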


On Tue, 2009-03-03 at 12:21 -0600, Brian Koebbe wrote:
> On Tue, 2009-03-03 at 09:41 -0800, Sage Weil wrote:
> > 
> > (I'm still looking at the bug you sent yesterday, by the 
> > way.  The latest unstable _may_ fix it, though I'm not quite done
> > yet.)
> > 
> 
> yeah... still getting that bug.
> 
> > This is with the latest unstable?  Can you do it again, but mount with
> > '-o 
> > debug_msgr=1,debug=100,debug_console', and send along the kern.log
> > output?
> 
> Will do...
> 

Brian Koebbe
Scientific Computing Systems Manager
Laboratory for Computational Genomics
Center for Genome Sciences
Washington University
Campus Box 8510
4444 Forest Park
St. Louis, MO  63108

