Re: [Lustre-discuss] Rule of thumb for setting up lustre resources...

2008-06-17 Thread Brian J. Murrell
On Tue, 2008-06-17 at 09:40 -0400, Mark True wrote:
 Thanks so much for the prompt response, I do have a couple of
 questions for clarification:
 
 Does the hardware makeup of the OSS affect the speed of the OSTs?

Of course.  An OST is only going to go as fast as the hardware that it's
made up of.  If you put a slow disk in an OSS, the OST is going to be
slow.

 If so, what is likely to be the bottleneck in an OSS?

There is no one right answer to that.  You have to get together with
your hardware vendor, explain your use scenario (i.e. as an OSS), and
have them spec out some hardware that meets the use-case.  If you will
be spec'ing your own hardware, then you need to grab the technical
specifications for all of the hardware you are proposing to use and
understand their performance aspects.  If you don't feel confident
doing the latter, then I would suggest you do the former.

In general, an OSS is I/O bound.  You need to provide enough I/O
capacity between the disk and network through which the data will
travel.
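
For a rough illustration (all numbers hypothetical): if each OST can
sustain ~200 MB/s from disk, an OSS with three such OSTs needs ~600 MB/s
of network bandwidth to run them flat out.  A single gigabit link tops
out around ~110 MB/s, so there the network, not the disks, would be the
bottleneck, and you'd want something like InfiniBand or bonded links.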

 Say we have an OSS with 3 OSTs attached, is that different than having
 three OSSs with 1 OST apiece?

That depends on whether that OSS with the 3 OSTs attached has the I/O
capacity to do full-out I/O to all three disks.  As I've said before,
this is basically an exercise in understanding the capacity of your
entire I/O path from OST to client and sizing to meet that capacity.

 Also, does the OSS have as much of a performance impact as the speed
 of the OST?

The OSS hosts the OST, so you can't really compare the performance
impact of one vs. the other.

 What is the recommended max number of OSTs per OSS?

As you have probably gathered, that is *completely* dependent on the
hardware you are using for OSSes and OSTs and there is no one answer
that fits all hardware.  You just want to make sure you don't create
bottlenecks, again by understanding the capacity of the various paths
from OST (disk) to client.

 If I am able to determine the max capabilities of an OST/OSS, is it
 safe to assume that the increase in performance scales linearly as I
 increase the number of OSS/OSTs?

Yes, raw bandwidth will grow pretty linearly.  It is up to your
applications and use-cases to take advantage of that, though.
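
For example (hypothetical numbers): four OSSes that each sustain
~350 MB/s give ~1.4 GB/s aggregate, but a single client writing one
unstriped file only ever talks to one OST, so you need enough
concurrent clients and/or files striped across OSTs (see lfs setstripe)
to actually see the aggregate figure.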

b.




___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Gluster then DRBD now Lustre?

2008-06-17 Thread Brian J. Murrell
On Tue, 2008-06-17 at 08:51 -0500, [EMAIL PROTECTED] wrote:
 
 So any node does you lose the data?

I will parse that as "so if you lose any node, you lose data?" and the
answer to that is yes.  If you lose an OST, you lose data.  If you lose
the MDT, you lose the entire filesystem.  Lustre assumes that the
storage you give it to manage is reliable.  That is, when you create an
OST or MDT, we very strongly suggest you create it on some sort of
reliable storage, like RAID 1/5/6 for an OST and RAID 1 for an MDT.
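
As a minimal sketch of what that looks like at format time (device
names and the nid below are hypothetical; 1.6-style mkfs.lustre):

# RAID-1 mirror for the combined MGS/MDT, RAID-6 array for an OST
mkfs.lustre --fsname=testfs --mgs --mdt /dev/md0
mkfs.lustre --fsname=testfs --ost --mgsnode=192.168.0.10@tcp /dev/md1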

 The plan is to add more and more servers, so the hope was to be able to 
 start with 2 and get some redundancy and then grow.

Lustre will give you that ability.

b.





Re: [Lustre-discuss] 2.6.22

2008-06-17 Thread Bernd Schubert
Hello Tamás,

On Tuesday 17 June 2008 16:41:55 Papp Tamás wrote:
 Dear All,

 Is there any reason not to use kernels with version 2.6.22.x above
 2.6.22.14, or should it work?

 I've just compiled it with 2.6.22.19 and I can mount the cluster, but
 after the first ls command it gives me an oops, and it gets stuck at
 this stage.

I didn't have the time to test lustre-1.6.5, but it would be quite helpful if 
you could paste the oops.


Thanks,
Bernd


-- 
Bernd Schubert
Q-Leap Networks GmbH


Re: [Lustre-discuss] MGS disk size and activity

2008-06-17 Thread Brian J. Murrell
On Tue, 2008-06-17 at 11:09 -0400, Brock Palen wrote:
 A post a few days back recommended that the MGS be placed on its own
 disk for "all but toy setups", I think was the comment.

Yeah.

 How much space does the MGS require?

Very little.  I think we have discussed here that 100MB should be
plenty, now and for the future.  Perhaps somebody who knows more about
what changes could be coming to the configuration logs might know
better.

 Is it based on the number of hosts?  Or just the number of lustre file
 systems (we expect only 1 but maybe more in future)?

Well, it would be both, but the amount of data per host (where a host
is an OSS or MDS) is relatively small.

 Lastly, what are the I/O requirements of the MGS?

Again, small.  It's used to record the addition of a new OST or MDT,
when nodes join the filesystem, and when a configuration change is made
with --writeconf.
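
For reference, the --writeconf case looks roughly like this (device
name hypothetical; the targets must be unmounted first, and the MDT
brought back before the OSTs):

tunefs.lustre --writeconf /dev/sdb1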

 I don't think it would be much, such that it could share spindles with
 the journal for the MDS file system?

Hrm.  Given its relatively low use, I'd think that would be fine.

b.





Re: [Lustre-discuss] 2.6.22

2008-06-17 Thread Papp Tamás
Papp Tamás wrote:
 Bernd Schubert wrote:
  Hello Tamás,

  On Tuesday 17 June 2008 16:41:55 Papp Tamás wrote:
   Is there any reason not to use kernels with version 2.6.22.x above
   2.6.22.14, or should it work?

   I've just compiled it with 2.6.22.19 and I can mount the cluster,
   but after the first ls command it gives me an oops, and it gets
   stuck at this stage.

  I didn't have the time to test lustre-1.6.5, but it would be quite
  helpful if you could paste the oops.

 Hello!

 Sure..

I'm sorry for my last mail; the same thing happens if I compile it
with these options:

./configure --with-linux=/usr/src/linux-2.6.22.14/ --disable-doc
--disable-utils --disable-tests --disable-server --prefix=/usr/src/lustre

So I guess there is something wrong.

Thank you very much,

tamas



Re: [Lustre-discuss] 2.6.22

2008-06-17 Thread Bernd Schubert
On Tue, Jun 17, 2008 at 05:50:54PM +0200, Papp Tamás wrote:
 I'm sorry for my last mail; the same thing happens if I compile it
 with these options:

 ./configure --with-linux=/usr/src/linux-2.6.22.14/ --disable-doc
 --disable-utils --disable-tests --disable-server --prefix=/usr/src/lustre

 So I guess there is something wrong.

Yeah, this is what I immediately thought when I saw your trace.  The
kernel developers somehow manage to change the interface to the cache
functions with each kernel version (though not within the last-digit
stable releases).  The trace makes me think these functions have been
called with the wrong arguments.  However, Lustre already has wrapper
functions for this, and I guess the configure script did something
wrong this time.  Unless the Lustre developers step in, I will try to
find some time tomorrow or on Thursday to check what's wrong.

Cheers,
Bernd


Re: [Lustre-discuss] MGS disk size and activity

2008-06-17 Thread Klaus Steden


 I don't think it would be much, such that it could share spindles with
 the journal for the MDS file system?

 Hrm.  Given its relatively low use, I'd think that would be fine.

I have a question ... if the MGS is used so infrequently relative to the use
of the MDS, why is it (is it?) problematic to locate it on the same volume
as the MDT?

thanks,
Klaus



Re: [Lustre-discuss] MGS disk size and activity

2008-06-17 Thread Brian J. Murrell
On Tue, 2008-06-17 at 12:40 -0700, Klaus Steden wrote:
  
 I have a question ... if the MGS is used so infrequently relative to the use
 of the MDS, why is it (is it?) problematic to locate it on the same volume
 as the MDT?

Our mountconf expert engineers probably know all of the reasons, but
one is that if you have multiple filesystems and, for some reason, you
need to reformat the MDT that is shared with the MGS, you end up losing
the configuration logs for all of your filesystems.
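
A sketch of the separate-MGS layout that avoids exactly that coupling
(devices and nid hypothetical):

mkfs.lustre --mgs /dev/sda1
mkfs.lustre --fsname=fs1 --mdt --mgsnode=10.0.0.1@tcp /dev/sdb1
mkfs.lustre --fsname=fs2 --mdt --mgsnode=10.0.0.1@tcp /dev/sdc1

Reformatting fs2's MDT then leaves fs1's configuration logs on the
standalone MGS untouched.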

b.





Re: [Lustre-discuss] 1.6.5 and OFED?

2008-06-17 Thread Kilian CAVALOTTI
On Monday 16 June 2008 04:35:41 am Greenseid, Joseph M. wrote:
 Is there any word on when the IB packages might be making it up to
 the download site for 1.6.5?  As had been previously noted, they were
 missing when the rest of 1.6.5 was pushed.

I'd like to support this request, since this is part of the 1.6.5 
Changelog:

Severity: enhancement 
Bugzilla: 15316 
Description: build kernel-ib packages for OFED 1.3 in our release cycle


Also, the download site lists lustre-client and lustre-client-modules 
RPMs for RHEL5 and SLES10, but not for RHEL4 nor SLES9. Is that by 
design, or are they missing too?

Thanks,
-- 
Kilian


Re: [Lustre-discuss] add space of MDS problem?

2008-06-17 Thread Johnlya


On Mon, 2008-06-16 at 21:38, Brian J. Murrell
[EMAIL PROTECTED] wrote:
 On Fri, 2008-06-13 at 04:35 -0700, Johnlya wrote:
  I tested it and it works, but I don't want to do it that way.  I want
  to add space the same way I would add an OST.

 That is currently not possible.  The only method we support for MDT
 expansion is the backup/recreate MDT/restore process.

 Cheers,
 b.


Thanks a lot!


Re: [Lustre-discuss] maximum MDT inode count

2008-06-17 Thread Daniel Leaberry
We have ~1.4 billion inodes and we're using about 50 million.  We use
lots of small files, so I took our highest utilization ever and doubled
it.  Better safe than sorry.

We did format with an inode every 1024 bytes, because we use no striping.
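
That ratio is set at format time, e.g. (device name hypothetical;
--mkfsoptions is passed through to the backing mke2fs):

mkfs.lustre --fsname=arch --mgs --mdt --mkfsoptions="-i 1024" /dev/sdb1

i.e. one inode per 1024 bytes of MDT space rather than the much sparser
default ratio.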

-- 
Daniel Leaberry
Senior Systems Administrator
iArchives Inc.

Andreas Dilger wrote:
 For future filesystem compatibility, we are wondering if there are any
 Lustre MDT filesystems in existence that have 2B or more total inodes?
 
 This is fairly unlikely, because it would require an MDT filesystem
 that is > 8TB in size (which isn't even supported yet) and/or has been
 formatted with specific options to increase the total number of inodes.
 
 This can be checked with dumpe2fs -h /dev/{mdtdev} | grep 'Inode count'
 on the MDT.  
 
 
 If there are issues with privacy, please email me directly.  Otherwise,
 I think people would be interested to know what the larger MDT sizes
 are (even if they aren't as high as 2B inodes).
 
 Cheers, Andreas
 --
 Andreas Dilger
 Sr. Staff Engineer, Lustre Group
 Sun Microsystems of Canada, Inc.
 