Hi,
We have 50 nodes, each of which has a Lustre partition mounted on
/scratch. Prior to using Lustre we used NFS, so the head node actually
passes Lustre traffic through to the Lustre servers.
We run a particular workload on these nodes where each node runs a job
that populates a directory
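For context, a Lustre client mount on /scratch of the kind described above is usually configured with an /etc/fstab entry like the one below. This is a generic sketch, not taken from the original message: the MGS node name (mgsnode@tcp0) and the filesystem name (scratch) are placeholders.

```
# Hypothetical example: mount the Lustre filesystem "scratch" on /scratch.
# "mgsnode@tcp0" and "scratch" are placeholder names, not from this thread.
# _netdev delays the mount until the network is up.
mgsnode@tcp0:/scratch  /scratch  lustre  defaults,_netdev  0 0
```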
On Sep 10, 2009 20:50 -0400, Adam wrote:
Unmounting worked, but remounting resulted in a LBUG (edited):
X:(tracefile.c:450:libcfs_assertion_failed()) LBUG
X:(handler.c:2049:mds_setup()) ASSERTION(!lvfs_check_rdonly(lvfs_sbdev(mnt->mnt_sb))) failed
Hello!
On Sep 11, 2009, at 1:17 AM, Muruga Prabu M wrote:
I have a small Java application that uploads images into the Lustre
filesystem. When I try to upload images from the application, the
MDS server crashes and a kernel panic occurs. I have attached the
output of 'dmesg', and the
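As a rough illustration only (the poster's actual Java application was not included in the thread), an image-upload workload of this shape can be approximated from the shell by writing a batch of small files onto the client mount. TARGET below is a placeholder; point it at the Lustre mount (e.g. /scratch) to exercise the MDS with file creations:

```shell
#!/bin/sh
# Hypothetical reproducer sketch: create a batch of small files, the way an
# image-upload application would. TARGET is a placeholder directory; it is
# NOT taken from the original report.
TARGET="${TARGET:-/tmp/scratch_upload_demo.$$}"
mkdir -p "$TARGET"

i=1
while [ "$i" -le 10 ]; do
    # write 64 KiB of data per "image"
    dd if=/dev/zero of="$TARGET/img_$i.dat" bs=1024 count=64 2>/dev/null
    i=$((i + 1))
done

# report how many files were written
ls "$TARGET" | wc -l
```

Each file creation hits the MDS for the metadata operation, so even this small loop is enough to trigger metadata-path problems if they exist.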
Is the read cache corruption actually causing on-disk corruption? Or just
in-memory corruption? I'm assuming the write cache corruption would end up
causing the file to become corrupt on disk, but if a node crashes during a
write then I'm personally not all that bothered by it.
On a side note,
Hello!
On Sep 11, 2009, at 9:33 AM, Aaron Knister wrote:
Is the read cache corruption actually causing on-disk corruption? Or
just in-memory corruption? I'm assuming the write cache corruption
would end up causing the file to become corrupt on disk, but if a
node crashes during a write
You might also want to look at ticket 20533 to see if it is related.
There are kernel patches for RHEL 5.3 which improve Lustre
performance on RAID, but there are no plans for a CentOS kernel.
Rafael David Tinoco wrote:
I think I've discovered the problem.
I was using multipathd in my