Hi-

I'm running openafs 1.4.0 rc3 on Linux 2.6 (specifically 2.6.13-gentoo-r2 with inotify _disabled_).

I frequently get the following crash. I can trigger it most of the time by running 'ls /afs/ir' (a symlink to /afs/ir.stanford.edu) after not using AFS for a while.

After this, processes accessing AFS get stuck in the D (uninterruptible sleep) state:

ls D ffff810019d0b440 0 3410 3361 3413 (NOTLB)
ffff810011eb7e38 0000000000000082 00000000005206a8 ffff81000db23d68
      ffff810008856e50 ffff8100088560b0 ffff810008856e50 ffff8100088562c8
      0000000000000000 ffffffff881992b0
Call Trace:<ffffffff881992b0>{:libafs:afs_linux_getattr+288} <ffffffff80389db6>{__down+198} <ffffffff8012d5a0>{default_wake_function+0} <ffffffff8038b9e4>{__down_failed+53} <ffffffff8018a250>{filldir+0} <ffffffff8018a5e9>{.text.lock.readdir+5}
      <ffffffff8018a3c2>{sys_getdents+130} <ffffffff8010f389>{error_exit+0}
      <ffffffff8010ea96>{system_call+126}

Any ideas?  Any tests I can run to help debug this?
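In case it helps with triage, here is a minimal sketch of how I pull the :libafs: frames out of a saved oops so they are easy to compare between crashes (the trace.txt filename is just illustration, and the sample line mirrors the real trace further down):

```shell
# Save a trace line to a file (illustrative filename and sample content):
cat > trace.txt <<'EOF'
Call Trace:<ffffffff88195088>{:libafs:osi_UFSOpen+440} <ffffffff88173562>{:libafs:afs_UFSHandleLink+274}
EOF

# Print one :libafs: frame per line, with the braces stripped:
grep -o '{:libafs:[^}]*}' trace.txt | tr -d '{}'
# -> :libafs:osi_UFSOpen+440
# -> :libafs:afs_UFSHandleLink+274
```

The symbol+offset pairs are what stay stable across reboots, so comparing them across several crashes shows whether it is always dying in the same place.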

Thanks,
Andy
Please CC me on replies, as I am not subscribed to the list.

afs: Lost contact with file server 137.226.244.82 in cell i1.informatik.rwth-aachen.de (multi-homed address; other same-host interfaces maybe up)
Can't open inode 677159
----------- [cut here ] --------- [please bite here ] ---------
Kernel BUG at "/var/tmp/portage/openafs-kernel-1.4.0_rc3/work/op:131
invalid operand: 0000 [1] PREEMPT
CPU 0
Modules linked in: ipt_conntrack iptable_nat ipt_REJECT ipt_state ip_conntrack ipt_multiport iptable_filter ip_tables libafs snd_pcm_oss snd_mixer_oss snd_seq_dummy snd_seq_oss snd_seq_midi_event snd_seq snd_cs4281 snd_opl3_lib snd_hwdep snd_via82xx snd_ac97_codec snd_pcm snd_timer snd_page_alloc snd_mpu401_uart snd_rawmidi snd_seq_device snd soundcore xfs raid5 xor raid0
Pid: 1439, comm: smbd Tainted: P      2.6.13-gentoo-r2
RIP: 0010:[<ffffffff88189f39>] <ffffffff88189f39>{:libafs:osi_Panic+25}
RSP: 0018:ffff810004f2da48  EFLAGS: 00010292
RAX: 000000000000001b RBX: fffffffffffffffb RCX: ffffffff804cfb50
RDX: ffff81000e532840 RSI: ffffffff804cf3f0 RDI: ffff81001edeef10
RBP: ffff81001f086c00 R08: 000000003b9aca00 R09: ffff81001f266ec0
R10: 00000000ffffffff R11: 0000000000000000 R12: ffff810019cce000
R13: ffffffff881c2d40 R14: 00000000000a5527 R15: 00000000000a5527
FS:  00002aaaaaadd0e0(0000) GS:ffffffff80502800(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000466330 CR3: 00000000066ff000 CR4: 00000000000006e0
Process smbd (pid: 1439, threadinfo ffff810004f2c000, task ffff81000e532840)
Stack: 00000000000a5527 ffffffff88195088 ffffc20000786d58 ffff8100137754e8
      ffffc200007864b8 ffff81001de7e000 ffff810013775280 0000000000000007
      ffff810004f2db88 ffffffff88173562
Call Trace:<ffffffff88195088>{:libafs:osi_UFSOpen+440} <ffffffff88173562>{:libafs:afs_UFSHandleLink+274} <ffffffff8816db2d>{:libafs:afs_lookup+3325} <ffffffff8814a371>{:libafs:afs_TraverseCells+81} <ffffffff8816ad06>{:libafs:EvalMountPoint+822} <ffffffff88199aef>{:libafs:afs_linux_lookup+191} <ffffffff80191333>{iput+83} <ffffffff88161f50>{:libafs:afs_PutVCache+160} <ffffffff8816b24c>{:libafs:afs_PutFakeStat+60} <ffffffff8018464e>{do_lookup+222} <ffffffff80185434>{__link_path_walk+2852} <ffffffff801859ea>{link_path_walk+186} <ffffffff80185cce>{path_lookup+494} <ffffffff80185fce>{__user_walk+62}
      <ffffffff8017f799>{vfs_stat+41} <ffffffff8017fbef>{sys_newstat+31}
      <ffffffff8010f389>{error_exit+0} <ffffffff8010ea96>{system_call+126}


Code: 0f 0b a3 f0 d2 1a 88 ff ff ff ff c2 83 00 48 83 c4 08 c3 66
RIP <ffffffff88189f39>{:libafs:osi_Panic+25} RSP <ffff810004f2da48>


_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel