Re: [Lustre-discuss] external journal and udev problem

2008-08-20 Thread Andreas Dilger
On Aug 19, 2008 14:59 +0200, Philippe Weill wrote: I put the journal on LVM over multipath for my OSTs. Formatting the OST was done with: mkfs.lustre -v --reformat --fsname=datafs --ost --mgsnode=10.0.0.1 --mgsnode=10.0.0.2 --failover=10.0.0.3 --mkfsoptions=-E stride=64 -E
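
A minimal sketch of that external-journal setup, not Philippe's exact commands (the journal LV and OST device paths here are assumptions):

    # Create the external journal device; its block size must match the OST's.
    mke2fs -O journal_dev -b 4096 /dev/vg_journal/ost0_jrnl
    # Format the OST, pointing ldiskfs at the external journal via mke2fs's -J option.
    mkfs.lustre --reformat --fsname=datafs --ost \
        --mgsnode=10.0.0.1 --mgsnode=10.0.0.2 \
        --mkfsoptions="-E stride=64 -J device=/dev/vg_journal/ost0_jrnl" \
        /dev/sdb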

Re: [Lustre-discuss] Understanding lfs output

2008-08-20 Thread Mag Gam
Thanks Andreas. Is this documented anywhere? We are facing inode shortage problems on our filesystem and I would like to show this to my professors. The reason why this is occurring is... I checked the operations manual and the Lustre intergalactic file system paper, but had no luck finding this issue. TIA On

[Lustre-discuss] LBUG on client: Found existing inode ... in lock

2008-08-20 Thread Erich Focht
Hello, we're seeing an LBUG on clients running Lustre 1.6.5.1 (the servers are still on 1.6.4.3). I tried finding this in bugzilla with no success. There seems to be some data inconsistency; can somebody please tell me whether this is rather on the server side (the data on disk is

Re: [Lustre-discuss] evicted clients with 1.6.5.1

2008-08-20 Thread Brian J. Murrell
On Tue, 2008-08-19 at 19:42 -0400, Christopher Walker wrote: Hello, Hi, (In the logs above 10.242.40.14 = herologin1). Should I try the echo 0 > /proc/fs/lustre/llite/*/statahead_max solution that fixed the statahead problem? It's surely worth a try, in an attempt to isolate the
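
The workaround being discussed, as a minimal sketch to run on the affected client (the loop handles the case of more than one mounted filesystem):

    # Disable statahead by zeroing the tunable for every llite instance.
    for f in /proc/fs/lustre/llite/*/statahead_max; do
        echo 0 > "$f"
    done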

Re: [Lustre-discuss] LBUG on client: Found existing inode ... in lock

2008-08-20 Thread Erich Focht
On Wednesday, 20 August 2008, Brian J. Murrell wrote: On Wed, 2008-08-20 at 16:00 +0200, Erich Focht wrote: Hello, we're seeing an LBUG on clients running Lustre 1.6.5.1 (the servers are still on 1.6.4.3). I tried finding this in bugzilla with no success. Bug 16427. Thanks

Re: [Lustre-discuss] HLRN lustre breakdown

2008-08-20 Thread Peter Jones
Hi there. I got the following background information from Juergen Kreuels at SGI. It turned out that a bad disk (which did NOT report itself as being bad) killed the Lustre filesystem, leading to data corruption in the inode areas on that disk. It was finally decided to remake the whole FS, and only during

Re: [Lustre-discuss] HLRN lustre breakdown

2008-08-20 Thread Bernd Schubert
Oh damn, I'm always afraid of silent data corruption from bad hard disks. We have already hit this issue ourselves; fortunately we found the disk before taking the system into production. Will Lustre 2.0 use the ZFS checksum feature? Thanks, Bernd On Wednesday 20 August 2008 19:08:34 Peter Jones

Re: [Lustre-discuss] Understanding lfs output

2008-08-20 Thread Andreas Dilger
On Aug 20, 2008 06:45 -0400, Mag Gam wrote: Is this documented anywhere? We are facing inode shortage problems on our filesystem and I would like to show this to my professors. The reason why this is occurring is... Well, according to the lfs df -i output below, there are lots of inodes free.
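
To reproduce the kind of check Andreas refers to (the mount point is an assumption):

    # Show free/used inodes per MDT and OST rather than blocks.
    lfs df -i /mnt/datafs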

Re: [Lustre-discuss] MDS question

2008-08-20 Thread Brian J. Murrell
On Thu, 2008-08-14 at 08:05 -0400, Mag Gam wrote: What is the disadvantage of creating an MDS partition with a smaller bytes-per-inode ratio? Right now it's 4k per inode; what happens if we go to the smallest, which is 1024 bytes? You risk running out of room in the inode for EAs, requiring that another block
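
For context, a hedged sketch of how those values are chosen at format time; the device path and the specific numbers are assumptions, not a recommendation:

    # -i sets bytes-per-inode (inode density); -I sets the inode size itself,
    # which bounds how much EA data fits inside the inode.
    mkfs.lustre --reformat --fsname=datafs --mdt --mgsnode=10.0.0.1 \
        --mkfsoptions="-i 2048 -I 512" /dev/sdc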

Re: [Lustre-discuss] lustre + debian

2008-08-20 Thread Troy Benjegerdes
them up. You are now ready to use Lustre, Debian style. If deploying this on a cluster, just install your ready-made .debs for the kernel and modules. None of this was explained in the README in /usr/share/doc/Lustre, so we had to figure it out ourselves. Hopefully the Debian folks have
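
The deployment step mentioned, as a sketch with hypothetical package names:

    # On each cluster node, install the prebuilt kernel and Lustre .debs.
    dpkg -i linux-image-*-lustre_*.deb lustre-modules_*.deb lustre-utils_*.deb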

Re: [Lustre-discuss] lustre + debian

2008-08-20 Thread Andreas Dilger
On Aug 20, 2008 17:40 -0500, Troy Benjegerdes wrote: them up. You are now ready to use Lustre, Debian style. If deploying this on a cluster, just install your ready-made .debs for the kernel and modules. None of this was explained in the README in /usr/share/doc/Lustre so we had

Re: [Lustre-discuss] MDS question

2008-08-20 Thread Mag Gam
Brian: What is an EA? Yes, we create 20k files per day. Multiply that by 360 days per year (for our research); that's about 7,200,000 files per year. With 11 years of data, that is roughly 79,200,000 files. Our file sizes range from 5M to 70M (average). I know it's crazy, but a professor or student will need any
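
Spelling out the arithmetic behind the figures quoted above:

    20,000 files/day × 360 days/year = 7,200,000 files/year
    7,200,000 files/year × 11 years = 79,200,000 files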

[Lustre-discuss] It gives error 'no space left' while Lustre still has space left

2008-08-20 Thread /* Chris */
Hi all, I hit a problem when testing lustre-1.6.5.1 on CentOS-5.2. I have four machines (PCs): an MGS co-located with the MDT, OSS-1, OSS-2, and a client (CLT). OSS-1 has two disks, formatted as ost01 (40GB) and ost02 (15GB). OSS-2 has two disks, formatted as ost03 (23GB) and
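
A first check for this symptom, assuming the client mount point is /mnt/lustre: compare per-OST usage, since a striped write fails with ENOSPC as soon as any OST it touches fills up:

    # Per-target block usage; look for one OST near 100% while others are not.
    lfs df /mnt/lustre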

Re: [Lustre-discuss] It gives error 'no space left' while Lustre still has space left

2008-08-20 Thread Brock Palen
If I understand right, when you use 'setstripe -c -1' Lustre will try to spread a file's data evenly over all OSTs. Because one of yours gets full, the file can no longer be added to. Lustre does not fall back to using fewer stripes, as most users say 'use more stripes' for a reason.
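
A minimal sketch of the fewer-stripes alternative Brock alludes to, with an assumed directory path; new files created under the directory get a single stripe, so a write needs free space on only one OST:

    # Stripe count 1 instead of -1 (all OSTs).
    lfs setstripe -c 1 /mnt/lustre/single_stripe_dir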