Re: Raid over 48 disks ... for real now

2008-01-18 Thread michael

Quoting Norman Elton [EMAIL PROTECTED]:


I posed the question a few weeks ago about how to best accommodate
software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
Thumper). I appreciate all the suggestions.

Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
controllers, each with eight 1TB drives, for a total raw storage of
48TB. I must admit, it's quite impressive. And loud. More information
about the hardware is available online...

http://www.sun.com/servers/x64/x4500/arch-wp.pdf

It came loaded with Solaris, configured with ZFS. Things seemed to
work fine. I did not do any benchmarks, but I can revert to that
configuration if necessary.

Now I've loaded RHEL onto the box. For a first shot, I've created one
RAID-5 array (+ 1 spare) on each of the controllers, then used LVM to
create a VolGroup across the arrays.

So now I'm trying to figure out what to do with this space. So far,
I've tested mke2fs on a 1TB and a 5TB LogVol.

I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
Am I better off sticking with relatively small partitions (2-5 TB), or
should I crank up the block size and go for one big partition?


Impressive system. I'm curious what the storage drives look like  
and how they attach to the server with that many disks.
Sounds like you have some time to play around before shoving it into  
production.

I wonder how long it would take to run an fsck on one large filesystem?
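If you do end up trying it, even something rough like this (the LV
name below is just a placeholder) would give a number without
touching the data:

  # forced, read-only check, timed; substitute your actual logical volume
  time fsck.ext3 -f -n /dev/VolGroup00/LogVol01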

Cheers,
Mike


Re: Raid over 48 disks ... for real now

2008-01-18 Thread Greg Cormier
 I wonder how long it would take to run an fsck on one large filesystem?

:)

I would imagine you'd have time to order a new system, build it, and
restore the backups before the fsck was done!


Re: Raid over 48 disks ... for real now

2008-01-18 Thread Norman Elton
It is quite a box. There's a picture of it with the cover removed
on Sun's website:

http://www.sun.com/images/k3/k3_sunfirex4500_4.jpg

From the X4500 homepage, there's a gallery of additional pictures. The
drives drop in from the top. Massive fans channel air through the small
gaps between the drives. It doesn't look like there's much room
between the disks, but a lot of cold air gets sucked in the front and
a lot of hot air comes out the back, so it must be doing its job :).

I have not tried an fsck on it yet. I'll probably set up a lot of 2TB
partitions rather than a single large partition, then write software
to handle storing data across many partitions.
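Very roughly, I'm picturing something along these lines (the counts
and names are placeholders, nothing is settled yet):

  # carve the volume group into a batch of 2TB logical volumes, ext3 on each
  for i in $(seq 1 20); do
      lvcreate -L 2T -n data$i VolGroup00
      mke2fs -j /dev/VolGroup00/data$i
  done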

Norman

On 1/18/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Impressive system. I'm curious what the storage drives look like
 and how they attach to the server with that many disks.
 Sounds like you have some time to play around before shoving it into
 production.
 I wonder how long it would take to run an fsck on one large filesystem?

 Cheers,
 Mike


Re: Raid over 48 disks ... for real now

2008-01-18 Thread Jon Lewis

On Thu, 17 Jan 2008, Janek Kozicki wrote:


I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.


There is ext4 (or ext4dev), which is ext3 modified to support
filesystems up to 1024 PB (1,048,576 TB). You could check whether
it's feasible. Personally, I'd always stick with ext2/ext3/ext4,
since it's the most widely used and thus has the best recovery tools.


Something else to keep in mind: XFS repair tools require large amounts 
of memory. If you were to create one or a few really huge filesystems on 
this array, you might end up with filesystems that can't be repaired 
because you don't have, and perhaps can't even get, a machine with enough 
RAM for the job... not to mention the amount of time it would take.


--
 Jon Lewis   |  I route
 Senior Network Engineer |  therefore you are
 Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_


Raid over 48 disks ... for real now

2008-01-17 Thread Norman Elton
I posed the question a few weeks ago about how to best accommodate
software RAID over an array of 48 disks (a Sun X4500 server, a.k.a.
Thumper). I appreciate all the suggestions.

Well, the hardware is here. It is indeed six Marvell 88SX6081 SATA
controllers, each with eight 1TB drives, for a total raw storage of
48TB. I must admit, it's quite impressive. And loud. More information
about the hardware is available online...

http://www.sun.com/servers/x64/x4500/arch-wp.pdf

It came loaded with Solaris, configured with ZFS. Things seemed to
work fine. I did not do any benchmarks, but I can revert to that
configuration if necessary.

Now I've loaded RHEL onto the box. For a first shot, I've created one
RAID-5 array (+ 1 spare) on each of the controllers, then used LVM to
create a VolGroup across the arrays.
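For anyone curious, it boiled down to something like this per
controller (device names here are illustrative, not the real X4500
mapping):

  # one RAID-5 array per controller: 7 active disks + 1 hot spare
  mdadm --create /dev/md0 --level=5 --raid-devices=7 --spare-devices=1 /dev/sd[a-h]

  # repeated for md1..md5, then the six arrays pooled into one volume group
  pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5
  vgcreate VolGroup00 /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 /dev/md5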

So now I'm trying to figure out what to do with this space. So far,
I've tested mke2fs on a 1TB and a 5TB LogVol.
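(Nothing fancier than roughly this, with made-up LV names:

  lvcreate -L 5T -n test5t VolGroup00
  mke2fs -j /dev/VolGroup00/test5t

and the same again at 1TB.)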

I wish RHEL would support XFS/ZFS, but for now, I'm stuck with ext3.
Am I better off sticking with relatively small partitions (2-5 TB), or
should I crank up the block size and go for one big partition?

Thoughts?

Norman Elton


Re: Raid over 48 disks ... for real now

2008-01-17 Thread Norman Elton
 Hi, sounds like a monster server. I am interested in how you will make
 the space useful to remote machines - iSCSI? This is what I am
 researching currently.

Yes, it's a honker of a box. It will be collecting data from various
collector servers. The plan right now is to collect the data into
binary files using a daemon (already running on a smaller box), then
make the last 30/60/90/?? days available in a database that is
populated from these files. If we need to gather older data, the
individual files must be consulted locally.

So, in production, I would probably set up the database partition on
its own set of 6 disks, then dedicate the rest to handling/archiving
the raw binary files. These files are small (a few MB each), as they
get rotated every five minutes.
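In rough terms, the production split might look like this (RAID
level, device names, and sizes are all still up in the air):

  # six disks reserved for the database, in their own array and volume group
  mdadm --create /dev/md6 --level=10 --raid-devices=6 /dev/sd[i-n]
  pvcreate /dev/md6
  vgcreate dbvg /dev/md6
  lvcreate -L 2T -n dbdata dbvg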

Hope this makes sense, and provides a little background info on what
we're trying to do.

Norman