Hello,

We are running CentOS 5.4 with a piece of proprietary software that does heavy I/O. It works fine on a shared NFS filesystem but kernel panics on Lustre. I have seen SELinux mentioned in connection with problems like this, but SELinux is disabled on our systems. We are using the official Lustre 1.8.2 RPM packages.
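For anyone wanting to reproduce the capture below, this is roughly how the netconsole/nc setup can be done. The target IP and port (10.1.13.163, UDP 515) and the log path match the nc command in our capture; the source interface (eth0) and the omitted target MAC are assumptions you would adjust for your own network:

```shell
# On the panicking Lustre client: forward kernel messages over UDP.
# netconsole parameter syntax:
#   [src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-mac]
# eth0 and the empty (broadcast) target MAC are assumptions; adjust as needed.
modprobe netconsole netconsole=@/eth0,[email protected]/

# Raise the console loglevel so more kernel messages reach netconsole
# (this is what the "SysRq : Changing Loglevel / Loglevel set to 9"
# lines in the capture correspond to).
dmesg -n 9

# On the receiving box: listen on UDP 515 and save everything to a file.
nc -k -l -u 10.1.13.163 515 | tee /tmp/devlustre12.log
```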
To debug this, I have loaded netconsole on the client and nc on another box. Here is the output from the machine receiving the netconsole messages:

===========================
r...@ev-hendelmanlx01:/tmp# nc -k -l -u 10.1.13.163 515 | tee /tmp/devlustre12.log
SysRq : Changing Loglevel
Loglevel set to 9
Lustre: mgc10.1.12...@tcp: Reactivating import
Lustre: Client credit01-client has started
SysRq : Changing Loglevel
Loglevel set to 9
SysRq : Changing Loglevel
Loglevel set to 9
Installing knfsd (copyright (C) 1996 [email protected]).
SELinux: initialized (dev nfsd, type nfsd), uses genfs_contexts
NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
NFSD: starting 90-second grace period
FS-Cache: Loaded
SELinux: initialized (dev 0:1a, type nfs4), uses genfs_contexts
----------- [cut here ] --------- [please bite here ] ---------
Kernel BUG at mm/mmap.c:468
invalid opcode: 0000 [1] SMP
last sysfs file: /devices/pci0000:00/0000:00:08.0/0000:02:00.1/irq
CPU 8
Modules linked in: nfs(U) fscache(U) nfsd(U) exportfs(U) nfs_acl(U) auth_rpcgss(U) mgc(U) lustre(U) lov(U) mdc(U) lquota(U) osc(U) ksocklnd(U) ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) netconsole(U) autofs4(U) hidp(U) rfcomm(U) l2cap(U) bluetooth(U) lockd(U) sunrpc(U) ip_conntrack_netbios_ns(U) ipt_REJECT(U) xt_state(U) ip_conntrack(U) nfnetlink(U) iptable_filter(U) ip_tables(U) ip6t_REJECT(U) xt_tcpudp(U) ip6table_filter(U) ip6_tables(U) x_tables(U) ipv6(U) xfrm_nalgo(U) crypto_api(U) dm_multipath(U) scsi_dh(U) video(U) hwmon(U) backlight(U) sbs(U) i2c_ec(U) i2c_core(U) button(U) battery(U) asus_acpi(U) acpi_memhotplug(U) ac(U) parport_pc(U) lp(U) parport(U) shpchp(U) hpilo(U) bnx2(U) serio_raw(U) pcspkr(U) dm_raid45(U) dm_message(U) dm_region_hash(U) dm_mem_cache(U) dm_snapshot(U) dm_zero(U) dm_mirror(U) dm_log(U) dm_mod(U) usb_storage(U) lpfc(U) scsi_transport_fc(U) cciss(U) sd_mod(U) scsi_mod(U) ext3(U) jbd(U) uhci_hcd(U) ohci_hcd(U) ehci_hcd(U)
Pid: 5384, comm: lustreCrusher Tainted: G
2.6.18-164.11.1.el5_lustre.1.8.2 #1
RIP: 0010:[<ffffffff800151a3>]  [<ffffffff800151a3>] vma_adjust+0x38c/0x459
RSP: 0018:ffff81091ccade38  EFLAGS: 00010287
RAX: ffff81049cb1e8c8 RBX: ffff81091a466c38 RCX: ffff81049d8fb4d8
RDX: 00002aaac9400000 RSI: 00002aaac8a00000 RDI: ffff81049d8fb4e0
RBP: ffff81049ff2a8c8 R08: ffff81091ccade88 R09: ffff81049cb1e8f8
R10: ffff81049d8fb4d8 R11: ffff81049cb1e8c8 R12: 0000000000000000
R13: ffff81049e7cce48 R14: ffff81049ff2ddf8 R15: 0000000000000000
FS:  0000000048bb9940(0063) GS:ffff81049fc1cc40(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00002aaac9201000 CR3: 000000091c738000 CR4: 00000000000006e0
Process lustreCrusher (pid: 5384, threadinfo ffff81091ccac000, task ffff81091f68b820)
Stack:
 00000002aaac9400 00002aaac9e00000 00002aaac9400000 ffff81091c7bc100
 0000000000000000 0000000000000000 0000000000000000 0000000000000000
 0000000000000000 ffff81091cf909c0 ffff81049cb1e8f8 ffff81049cb1e900
Call Trace:
 [<ffffffff8001c9b2>] split_vma+0x12a/0x13c
 [<ffffffff80011cb6>] do_munmap+0x158/0x27a
 [<ffffffff800645ab>] __down_write_nested+0x12/0x92
 [<ffffffff800160d5>] sys_munmap+0x40/0x59
 [<ffffffff8005d28d>] tracesys+0xd5/0xe0
Code: 0f 0b 68 b8 bf 2a 80 c2 d4 01 48 8b 4c 24 58 48 8b 54 24 60
RIP [<ffffffff800151a3>] vma_adjust+0x38c/0x459 RSP <ffff81091ccade38>
<0>Kernel panic - not syncing: Fatal exception

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
