So if I have arrays with 15 drives in them, should I just configure two smaller arrays? Also, if I make a giant 30-terabyte filesystem out of, say, 6TB disk arrays, and one of my disk arrays bites the dust, what happens to the rest of the filesystem, and how easy is it to recover from this situation?

-Aaron

On Oct 10, 2007, at 11:48 AM, Andreas Dilger wrote:

On Oct 10, 2007  09:40 -0600, Lundgren, Andrew wrote:
As RH 5.1 will support 16TB ext3 partitions, will Lustre inherit that functionality?

We haven't looked at this yet. The ldiskfs code is ext3 + patches, so there is some chance that it will work (more likely on 64-bit platforms), but we haven't audited the ldiskfs patches to check if they are 32-bit clean.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of
Andreas Dilger
Sent: Wednesday, October 10, 2007 9:26 AM
To: Aaron Knister
Cc: [email protected]
Subject: Re: [Lustre-discuss] Hardware Question

On Oct 06, 2007  10:28 -0400, Aaron Knister wrote:
Oh, right, I forgot about that. Well... if I had an 8TB LUN and split it into 2 volume groups using LVM, do you think the performance would be worse than making 2 RAIDs at the hardware level?
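(For concreteness, the split being described, strictly one LVM volume group carved into two logical volumes, might look like the sketch below. Device, VG, and LV names are made up:)

    # /dev/sdb is the hypothetical single 8TB LUN exported by the controller
    pvcreate /dev/sdb
    vgcreate ostvg /dev/sdb
    # carve it into two ~4TB logical volumes, one per OST;
    # sized slightly under half to leave room for LVM metadata
    lvcreate -L 3900G -n ost0 ostvg
    lvcreate -L 3900G -n ost1 ostvg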

Well, it won't be doing the disks any favours, since you'll now have contention between the OSTs, and the kernel will be doing a poor job with the IO elevator decisions. I would suggest making 2 smaller RAID LUNs instead.
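(A sketch of that alternative, assuming the controller exports two ~4TB LUNs as /dev/sdb and /dev/sdc and a 1.6-style mkfs.lustre; the fsname and MGS NID below are placeholders:)

    # each LUN becomes its own OST, with no LVM layer in between
    mkfs.lustre --fsname=testfs --ost --mgsnode=mds1@tcp0 /dev/sdb
    mkfs.lustre --fsname=testfs --ost --mgsnode=mds1@tcp0 /dev/sdc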

In the end it is up to you to decide if the IO performance is acceptable. You can do some testing using lustre-iokit to see what the component device performance is.
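(For example, sgpdd-survey from lustre-iokit measures raw LUN throughput through the sg driver. The parameter names below are from memory, so check them against the iokit README; note that the survey is destructive and will overwrite the device:)

    # region counts and thread counts are scanned up to the "hi" values
    size=8192 crghi=16 thrhi=32 scsidevs="/dev/sg0" ./sgpdd-survey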

On Oct 5, 2007, at 6:18 PM, Andreas Dilger wrote:

On Oct 05, 2007  13:14 -0400, Aaron Knister wrote:
Make that 6x 9.7TB LUNs.

Lustre (== ext3) doesn't support >= 8TB LUNs.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.


Aaron Knister
Associate Systems Administrator/Web Designer
Center for Research on Environment and Water

(301) 595-7001
[EMAIL PROTECTED]



_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
