Rob,

Today I got a different error, a more serious one: a segmentation fault 
from the kernel. I have attached the dmesg output in this email. Hope this 
helps.

% ls -l /pvfs2/wkliao
ls: cannot access /pvfs2/wkliao/file.dat: No such file or directory
Segmentation fault
[EMAIL PROTECTED] scratch]# 
Message from [EMAIL PROTECTED] at Jan 24 12:47:54 ...
 kernel: ------------[ cut here ]------------

Message from [EMAIL PROTECTED] at Jan 24 12:47:54 ...
 kernel: invalid opcode: 0000 [2] SMP 


Wei-keng


On Wed, 23 Jan 2008, Robert Latham wrote:

> On Wed, Jan 23, 2008 at 02:12:31PM -0600, Wei-keng Liao wrote:
> > 
> > I checked the output of dmesg on both the client and server sides and did 
> > not find anything related to a 32-bit/64-bit mismatch. But I do not know 
> > the exact message that should appear in dmesg to indicate that. Attached 
> > is the output of dmesg from a client.
> > 
> > I also found that if I umount /pvfs2 and mount it again, the error is 
> > gone and ls shows the file correctly. But if I delete the file and 
> > re-run the MPI code to create a new file, the same error occurs.
> 
> You know, I wonder if this is related to the fixes Phil just checked
> in.  Let me check something...
> 
> ==rob
> 
> -- 
> Rob Latham
> Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
> Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
> 
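For reference, the MPI code mentioned above simply creates a file on the 
PVFS2 mount and writes to it. Below is a minimal sketch of that kind of 
program, not the actual code: the path is taken from the ls output above, 
and the write pattern and sizes are only an illustration.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int rank;
    char buf[4] = "abc";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* create the file on the PVFS2 mount (path from the ls output above) */
    MPI_File_open(MPI_COMM_WORLD, "/pvfs2/wkliao/file.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* each rank writes a few bytes at its own offset (sizes are made up) */
    MPI_File_write_at(fh, (MPI_Offset)rank * 4, buf, 4,
                      MPI_BYTE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Running something like this (e.g. with mpiexec) and then doing the ls shown 
at the top is the sequence that hits the segfault and the kernel BUG in the 
dmesg output below.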
------------[ cut here ]------------
kernel BUG at fs/dcache.c:986!
invalid opcode: 0000 [1] SMP 
CPU 0 
Modules linked in: pvfs2(U) autofs4 nfs lockd nfs_acl sunrpc ipv6 
cpufreq_ondemand acpi_cpufreq loop dm_multipath sr_mod tg3 cdrom iTCO_wdt 
iTCO_vendor_support dcdbas button i2c_i801 i2c_core pcspkr sg joydev 
dm_snapshot dm_zero dm_mirror dm_mod ata_piix ata_generic libata sd_mod 
scsi_mod ext3 jbd mbcache uhci_hcd ohci_hcd ehci_hcd
Pid: 24363, comm: updatedb Not tainted 2.6.23.9-85.fc8 #1
RIP: 0010:[<ffffffff810abe06>]  [<ffffffff810abe06>] d_instantiate+0x14/0x7c
RSP: 0018:ffff81004b70fbc8  EFLAGS: 00010287
RAX: 0000000000008000 RBX: ffff81004d879128 RCX: 0000000000000000
RDX: 0000000000000003 RSI: ffff81004d879128 RDI: ffff810057cfcc30
RBP: ffff810057cfcc30 R08: 6000000000000000 R09: 0000000000000000
R10: 0000000000000400 R11: ffffffff8101bdf7 R12: ffff810057cfcca0
R13: ffff81004d879128 R14: ffff81005797da98 R15: ffff81004b70fcb8
FS:  00002aaaaaaca6f0(0000) GS:ffffffff813bd000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00002aaaaaaae000 CR3: 000000004bff6000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process updatedb (pid: 24363, threadinfo ffff81004b70e000, task 
ffff81004da24820)
Stack:  ffff81004d879128 ffff810057cfcc30 0000000000000000 ffffffff810ad212
 ffff810072716000 ffff8100755f9bc0 ffff810057cfcc30 ffffffff8822dd7d
 ffff810057cfcc30 ffff81004b70fe48 ffff8100755f9bc0 ffff810077828a00
Call Trace:
 [<ffffffff810ad212>] d_splice_alias+0xfd/0x10d
 [<ffffffff8822dd7d>] :pvfs2:pvfs2_d_revalidate+0x1d2/0x29d
 [<ffffffff810a2a93>] do_lookup+0x157/0x1ae
 [<ffffffff810a4d2f>] __link_path_walk+0x8eb/0xd9c
 [<ffffffff810a5238>] link_path_walk+0x58/0xe0
 [<ffffffff810a55a8>] do_path_lookup+0x1a3/0x21d
 [<ffffffff810a409f>] getname+0x14c/0x1b2
 [<ffffffff810a5df7>] __user_walk_fd+0x37/0x4c
 [<ffffffff8109e731>] vfs_lstat_fd+0x18/0x47
 [<ffffffff8109e923>] sys_newlstat+0x19/0x31
 [<ffffffff8100bce1>] tracesys+0x71/0xda
 [<ffffffff8100bd45>] tracesys+0xd5/0xda


Code: 0f 0b eb fe 48 c7 c7 80 28 41 81 e8 cc 10 1b 00 48 85 db 74 
RIP  [<ffffffff810abe06>] d_instantiate+0x14/0x7c
 RSP <ffff81004b70fbc8>
------------[ cut here ]------------
kernel BUG at fs/dcache.c:986!
invalid opcode: 0000 [2] SMP 
CPU 0 
Modules linked in: pvfs2(U) autofs4 nfs lockd nfs_acl sunrpc ipv6 
cpufreq_ondemand acpi_cpufreq loop dm_multipath sr_mod tg3 cdrom iTCO_wdt 
iTCO_vendor_support dcdbas button i2c_i801 i2c_core pcspkr sg joydev 
dm_snapshot dm_zero dm_mirror dm_mod ata_piix ata_generic libata sd_mod 
scsi_mod ext3 jbd mbcache uhci_hcd ohci_hcd ehci_hcd
Pid: 28409, comm: ls Tainted: G      D 2.6.23.9-85.fc8 #1
RIP: 0010:[<ffffffff810abe06>]  [<ffffffff810abe06>] d_instantiate+0x14/0x7c
RSP: 0018:ffff81003e691bc8  EFLAGS: 00010287
RAX: 0000000000008000 RBX: ffff8100755f9128 RCX: 0000000000000000
RDX: 0000000000000003 RSI: ffff8100755f9128 RDI: ffff810057cfcc30
RBP: ffff810057cfcc30 R08: 6000000000000000 R09: 0000000000000000
R10: ffffffff81370660 R11: 00008b0a93741d6d R12: ffff810057cfcca0
R13: ffff8100755f9128 R14: ffff81005797da98 R15: ffff81003e691cb8
FS:  00002aaaaaacc930(0000) GS:ffffffff813bd000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000032a5f24480 CR3: 000000004fa9a000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process ls (pid: 28409, threadinfo ffff81003e690000, task ffff81007a489040)
Stack:  ffff8100755f9128 ffff810057cfcc30 0000000000000000 ffffffff810ad212
 ffff81005c12a000 ffff8100755f9bc0 ffff810057cfcc30 ffffffff8822dd7d
 ffff810057cfcc30 ffff81003e691e48 ffff8100755f9bc0 ffff81007ef3b700
Call Trace:
 [<ffffffff810ad212>] d_splice_alias+0xfd/0x10d
 [<ffffffff8822dd7d>] :pvfs2:pvfs2_d_revalidate+0x1d2/0x29d
 [<ffffffff810a2a93>] do_lookup+0x157/0x1ae
 [<ffffffff810a4d2f>] __link_path_walk+0x8eb/0xd9c
 [<ffffffff810a5238>] link_path_walk+0x58/0xe0
 [<ffffffff8102f7c5>] __wake_up+0x38/0x4f
 [<ffffffff81170b99>] tty_ldisc_deref+0x62/0x75
 [<ffffffff810a55a8>] do_path_lookup+0x1a3/0x21d
 [<ffffffff810a409f>] getname+0x14c/0x1b2
 [<ffffffff810a5df7>] __user_walk_fd+0x37/0x4c
 [<ffffffff8109e731>] vfs_lstat_fd+0x18/0x47
 [<ffffffff8102f7c5>] __wake_up+0x38/0x4f
 [<ffffffff81170b99>] tty_ldisc_deref+0x62/0x75
 [<ffffffff8109e923>] sys_newlstat+0x19/0x31
 [<ffffffff8100bce1>] tracesys+0x71/0xda
 [<ffffffff8100bd45>] tracesys+0xd5/0xda


Code: 0f 0b eb fe 48 c7 c7 80 28 41 81 e8 cc 10 1b 00 48 85 db 74 
RIP  [<ffffffff810abe06>] d_instantiate+0x14/0x7c
 RSP <ffff81003e691bc8>

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
