On Aug 19, 2008 14:59 +0200, Philippe Weill wrote:
I put the journal on LVM over multipath for my OSTs
formatting the OST was done with
mkfs.lustre -v --reformat --fsname=datafs --ost --mgsnode=10.0.0.1
--mgsnode=10.0.0.2 --failover=10.0.0.3 --mkfsoptions=-E stride=64 -E
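[Editor's note: the quoted command is cut off, and as written the unquoted --mkfsoptions would be split by the shell. For reference, multiple mke2fs -E extended options go comma-separated in a single -E, and the whole --mkfsoptions string needs quoting. A hedged sketch in that form; the stripe_width value and the device path are hypothetical placeholders, not from the original mail:]

```shell
# Sketch only; stripe_width=256 and the LVM device path are made-up placeholders.
mkfs.lustre -v --reformat --fsname=datafs --ost \
    --mgsnode=10.0.0.1 --mgsnode=10.0.0.2 --failover=10.0.0.3 \
    --mkfsoptions='-E stride=64,stripe_width=256' /dev/mapper/ostvg-ostlv
```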
Thanks Andreas.
Is this documented anywhere? We are facing inode shortage problems on
our filesystem and I would like to show this to my professors. The
reason why this is occurring is..
I checked the operations manual and the "Lustre: the inter-galactic file
system" paper, but had no luck finding this issue.
TIA
Hello,
we're seeing an LBUG on clients running Lustre 1.6.5.1 (the servers are
still on 1.6.4.3). I tried finding this in Bugzilla with no success. There
seems to be some data inconsistency; can somebody please tell me whether this
is rather on the server side (the data on disk is
On Tue, 2008-08-19 at 19:42 -0400, Christopher Walker wrote:
Hello,
Hi,
(In the logs above 10.242.40.14 = herologin1). Should I try the
echo 0 > /proc/fs/lustre/llite/*/statahead_max
solution that fixed the statahead problem?
It's surely worth a try, in an attempt to isolate the
On Wednesday 20 August 2008, Brian J. Murrell wrote:
On Wed, 2008-08-20 at 16:00 +0200, Erich Focht wrote:
Hello,
we're seeing an LBUG on clients running Lustre 1.6.5.1 (the servers are
still on 1.6.4.3). I tried finding this in Bugzilla with no success.
Bug 16427.
Thanks
Hi there
I got the following background information from Juergen Kreuels at SGI
It turned out that a bad disk (which did NOT report itself as being
bad) killed the Lustre filesystem, leading to data corruption because inode
areas sat on that disk.
It was finally decided to remake the whole FS and only during
Oh damn, I'm always afraid of silent data corruption due to bad hard disks. We
also already had this issue; fortunately we found the disk before taking the
system into production.
Will lustre-2.0 use the ZFS checksum feature?
Thanks,
Bernd
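[Editor's note: the ZFS feature Bernd asks about is end-to-end checksumming, where every block is stored with a checksum that is re-verified on read. A minimal toy sketch of the idea (not Lustre or ZFS code) showing how a silently flipped bit is caught:]

```python
import hashlib

def write_block(data: bytes):
    # Store the block together with a checksum of its contents,
    # as end-to-end checksumming filesystems (e.g. ZFS) do.
    return data, hashlib.sha256(data).digest()

def read_block(data: bytes, checksum: bytes) -> bytes:
    # On read, recompute and compare; a mismatch exposes silent corruption.
    if hashlib.sha256(data).digest() != checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

block, csum = write_block(b"important research data")
assert read_block(block, csum) == b"important research data"

# A single flipped bit (a disk error the drive never reported) is caught on read.
corrupted = bytes([block[0] ^ 0x01]) + block[1:]
try:
    read_block(corrupted, csum)
    caught = False
except IOError:
    caught = True
assert caught
```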
On Wednesday 20 August 2008 19:08:34 Peter Jones
On Aug 20, 2008 06:45 -0400, Mag Gam wrote:
Is this documented anywhere? We are facing inode shortage problems on
our filesystem and I would like to show this to my professors. The
reason why this is occurring is..
Well, according to the lfs df -i output below there are lots of inodes
free.
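[Editor's note: the `lfs df -i` output Andreas refers to is not reproduced in this excerpt. A hedged sketch of reading such output to spot targets running short on inodes; the sample text below is illustrative, not from the original thread:]

```python
# Parse `lfs df -i`-style output (columns: UUID, Inodes, IUsed, IFree,
# IUse%, Mounted on) and flag targets above an inode-usage threshold.
sample = """\
UUID                  Inodes    IUsed    IFree  IUse% Mounted on
datafs-MDT0000_UUID  1000000   120000   880000    12% /mnt/datafs[MDT:0]
datafs-OST0000_UUID   500000   490000    10000    98% /mnt/datafs[OST:0]
datafs-OST0001_UUID   500000   200000   300000    40% /mnt/datafs[OST:1]
"""

def low_inode_targets(output: str, threshold_pct: int = 90):
    hits = []
    for line in output.splitlines()[1:]:   # skip the header row
        parts = line.split()
        use_pct = int(parts[4].rstrip("%"))
        if use_pct >= threshold_pct:
            hits.append(parts[0])
    return hits

print(low_inode_targets(sample))  # → ['datafs-OST0000_UUID']
```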
On Thu, 2008-08-14 at 08:05 -0400, Mag Gam wrote:
What is the disadvantage of creating an MDS partition with a smaller
bytes-per-inode ratio? Right now it's 4k per inode; what happens if we go
to the smallest ratio, which is 1024 bytes?
You risk running out of room in the inode for EAs, requiring that
another block
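[Editor's note: the tradeoff above is easy to quantify. With mke2fs, -i sets the bytes-per-inode ratio, so shrinking the ratio from 4096 to 1024 quadruples the inode count on the same device (at the cost of inode-table space and the EA-spill risk Brian describes). Back-of-the-envelope sketch; the 100 GB device size is a made-up example:]

```python
# mke2fs allocates roughly one inode per `bytes_per_inode` bytes of device.
def inode_count(device_bytes: int, bytes_per_inode: int) -> int:
    return device_bytes // bytes_per_inode

device = 100 * 1024**3  # hypothetical 100 GiB MDT
for ratio in (4096, 2048, 1024):
    # 4096 → 26,214,400 inodes; 1024 → 104,857,600 inodes
    print(ratio, inode_count(device, ratio))
```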
them up. You are now ready to use Lustre Debian style. If deploying this on
a cluster, just install your ready-made .debs for the kernel and modules.
None of this was explained in the README in /usr/share/doc/Lustre, so we had
to figure it out ourselves. Hopefully the Debian folks have
On Aug 20, 2008 17:40 -0500, Troy Benjegerdes wrote:
them up. You are now ready to use Lustre Debian style. If deploying this on
a cluster, just install your ready-made .debs for the kernel and modules.
None of this was explained in the README in /usr/share/doc/Lustre, so we had
Brian:
What is an EA?
Yes. We create 20k files per day. Multiply that by 360 days per year
(for our research); that's about 7.2 million files per year.
We have 11 years of data: roughly 79.2 million files.
Our file sizes range from 5M to 70M (average).
I know it's crazy, but a professor or student will need any
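[Editor's note: working out the poster's file-count arithmetic as stated (20k files/day, a 360-day research year, 11 years of data):]

```python
files_per_day = 20_000
per_year = files_per_day * 360   # 7,200,000 files per year
total = per_year * 11            # 79,200,000 files over 11 years
print(per_year, total)           # → 7200000 79200000
```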
Hi, all,
I ran into a problem when testing lustre-1.6.5.1 on CentOS-5.2.
I have four machines (PCs): MGS co-located with MDT, OSS-1, OSS-2, and
CLT.
OSS-1 has two disks, formatted as ost01 (40GB) and ost02 (15GB).
OSS-2 has two disks, formatted as ost03 (23GB) and
If I understand right, when you use 'setstripe -c -1' Lustre will try
to spread a file's data evenly over all OSTs.
Because one of yours gets full, the file can no longer be appended to.
Lustre does not fall back to using fewer stripes, as most users say
'use more stripes' for a reason.
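[Editor's note: a toy sketch (not Lustre code) of the failure mode described above. With `-c -1` every OST holds one stripe of the file, chunks land round-robin across the stripes, so the first write that needs space on the full OST fails even though other OSTs still have room. OST names and free-chunk counts are made up:]

```python
ost_free = {"ost01": 3, "ost02": 1, "ost03": 3}  # hypothetical free chunks per OST

def append_chunks(stripes, n_chunks):
    # Round-robin new chunks over the file's stripes, as striped layouts do.
    written = 0
    for i in range(n_chunks):
        ost = stripes[i % len(stripes)]
        if ost_free[ost] == 0:
            raise OSError(f"ENOSPC: {ost} is full")
        ost_free[ost] -= 1
        written += 1
    return written

stripes = list(ost_free)  # -c -1: one stripe on every OST
try:
    append_chunks(stripes, 9)
except OSError as e:
    print(e)  # → ENOSPC: ost02 is full  (after ost02's one free chunk is used)
```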