Re: transferring RAID-1 drives via sneakernet

2008-02-12 Thread Brendan Conoboy

Jeff Breidenbach wrote:

Does the new machine have a RAID array already?


Yes, the new machine already has one RAID array.
After sneakernet it should have two RAID arrays. Is
there a gotcha?


It's not a RAID issue, but make sure you don't have any duplicate volume 
names.  According to Murphy's Law, if there are two volumes labeled /, 
the wrong one will be mounted on your next reboot.
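
For example, before cabling the drives into the new box you could compare 
labels and array names on both machines, along these lines (just one way 
to do it; device names are placeholders):

  # list filesystem labels and UUIDs
  blkid
  # rename a clashing ext2/ext3 label on an incoming drive
  e2label /dev/sdb1 olddata
  # check for clashing md array names/UUIDs as well
  mdadm --detail --scan

If the volumes are LVM, the same applies to volume group names (vgrename).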


--
Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]


Re: Raid over 48 disks

2007-12-18 Thread Brendan Conoboy

Norman Elton wrote:
We're investigating the possibility of running Linux (RHEL) on top of 
Sun's X4500 Thumper box:


http://www.sun.com/servers/x64/x4500/


Neat: six 8-port SATA controllers!  It'll be worth checking to be sure 
each controller has equal bandwidth.  If some controllers are on slower 
buses than others, you may want to take that into account and balance 
the md device layout accordingly.
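
One quick way to check (just a sketch) is to look at where each 
controller sits in the PCI topology:

  # show the PCI tree; controllers behind a slower bridge will stand out
  lspci -tv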


So... we're curious how Linux will handle such a beast. Has anyone run 
MD software RAID over so many disks? Then piled LVM/ext3 on top of that? 
Any suggestions?


There used to be a maximum number of devices allowed in a single md 
device.  Not sure if that is still the case.


With this many drives you would be well advised to make smaller RAID 
devices and then combine them into a larger md device (or via LVM, etc.). 
Consider a write with a 48-device RAID 5: the system may need to read 
blocks from all of those drives before it can complete a single write!


If it were my system and all ports were equally well connected, I'd 
create 3 16-drive RAID 5 arrays with 1 hot spare, then combine them via 
RAID 0 or LVM.  That's just my usage scenario, though (modest 
reliability, excellent read speed, modest write speed).
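
For illustration only, a rough mdadm sketch of that sort of layout 
(device names are placeholders and the counts will need adjusting to 
leave room for the spare):

  # one 16-drive RAID 5 member array (repeat for /dev/md2 and /dev/md3
  # with the next sets of drives)
  mdadm --create /dev/md1 --level=5 --raid-devices=16 /dev/sd[b-q]
  # stripe the three RAID 5 arrays together
  mdadm --create /dev/md4 --level=0 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3

A shared hot spare can be handled with a spare-group entry in mdadm.conf 
so mdadm --monitor can move it to whichever array loses a drive.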


If you put ext3 on top, remember to use the stride option when making 
the filesystem.
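
Something like this, assuming a 64 KiB chunk size and 4 KiB blocks 
(stride = chunk size / block size):

  # 64 KiB chunk / 4 KiB block = stride of 16
  mkfs.ext3 -b 4096 -E stride=16 /dev/md4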



Are we crazy to think this is even possible?


Crazy, possible, and fun!

--
Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]


Re: RAID 5 performance issue.

2007-10-05 Thread Brendan Conoboy

Andrew Clayton wrote:

If anyone has any ideas, I'm all ears.


Hi Andrew,

Are you sure your drives are healthy?  Try benchmarking each drive 
individually and see if there is a dramatic performance difference 
between any of them.  One failing drive can slow down an entire array. 
Combined results are only particularly meaningful once you have 
determined that your drives are healthy when accessed individually.  For 
a generic SATA 1 drive you should expect a sustained raw read or write 
in excess of 45 MB/s.  Check both read and write (the write test will 
destroy data), and make sure your cache is clear prior to the read test 
and after the write test.  If each drive is working at a reasonable rate 
individually, you're ready to move on.
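
For example (a sketch only; substitute your real device names, and note 
that the write test destroys whatever is on the drive):

  # clear the page cache so the read test actually hits the disk
  echo 3 > /proc/sys/vm/drop_caches
  # raw sequential read from one drive
  dd if=/dev/sda of=/dev/null bs=1M count=1024
  # raw sequential write -- THIS DESTROYS DATA on /dev/sda
  dd if=/dev/zero of=/dev/sda bs=1M count=1024

Repeat for each drive and compare the numbers.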


The next question is: What happens when you access more than one device 
at the same time?  You should either get nearly full combined 
performance, max out CPU, or get throttled by bus bandwidth (An actual 
kernel bug could also come into play here, but I tend to doubt it).  Is 
the onboard SATA controller real SATA or just an ATA-SATA converter?  If 
the latter, you're going to have trouble getting faster performance than 
any one disk can give you at a time.  The output of 'lspci' should tell 
you if the onboard SATA controller is on its own bus or sharing space 
with some other device.  Pasting the output here would be useful.
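
For instance (again just a placeholder sketch):

  # read from two drives at once and compare against the single-drive numbers
  dd if=/dev/sda of=/dev/null bs=1M count=1024 &
  dd if=/dev/sdb of=/dev/null bs=1M count=1024 &
  wait

If the combined total is barely better than one drive alone, the bus or 
the controller is the bottleneck.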


Assuming you get good performance out of all 3 drives at the same time, 
it's time to create a RAID 5 md device with the three drives, make sure 
the parity is done building, then benchmark that.  It's going to be 
slower to write and a bit slower to read (especially if your CPU is 
maxed out), but that is normal.
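
Roughly (device names are placeholders):

  # create the three-drive RAID 5
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  # wait for the initial parity build to finish before benchmarking
  cat /proc/mdstat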


Assuming you get good performance out of your md device, it's time to 
put your filesystem on the md device and benchmark that.  If you use 
ext3, remember to set the stride parameter per the raid howto.  I am 
unfamiliar with other fs/md interactions, so be sure to check.


If you're actually maxing out your bus bandwidth and the onboard sata 
controller is on a different bus than the pci sata controller, try 
balancing the drives between the two to get a larger combined pipe.


Good luck,

--
Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]


Re: limits on raid

2007-06-18 Thread Brendan Conoboy

[EMAIL PROTECTED] wrote:
In my case it takes 2+ days to resync the array before I can do any 
performance testing with it.  For some reason it's only doing the 
rebuild at ~5 MB/s (even though I've increased the min and max rebuild 
speeds, and a dd to the array seems to be ~44 MB/s, even during the 
rebuild).


With performance like that, it sounds like you're saturating a bus 
somewhere along the line.  If you're using scsi, for instance, it's very 
easy for a long chain of drives to overwhelm a channel.  You might also 
want to consider some other RAID layouts like 1+0 or 5+0 depending upon 
your space vs. reliability needs.
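
It's also worth confirming what the rebuild throttle is actually set to 
and which device is the busy one, something like this (iostat comes from 
the sysstat package):

  # current md rebuild limits, in KB/s
  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  # per-device utilization during the rebuild; one channel pegged while
  # the others sit idle points at a saturated bus
  iostat -x 5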


--
Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]


Re: limits on raid

2007-06-18 Thread Brendan Conoboy

[EMAIL PROTECTED] wrote:

Yes, sorry, Ultra320 wide SCSI.


Exactly how many channels and drives?

--
Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]


Re: replace disk in raid5 without linux noticing?

2006-04-19 Thread Brendan Conoboy

Ming Zhang wrote:

Why can't you just mark that drive as failed, remove it and hotadd a
new drive to replace the failed drive?


Because a background rebuild is slower than a disk-to-disk copy, since 
his disk is still fully functional.


Wouldn't it be great if every disk in a RAID volume were in its own way 
a degraded RAID1 device without a mirror?  Then when any drive started 
generating recoverable errors and warnings a mirror could be allocated 
without any downtime.  You can certainly generate a layout like this 
manually, but it would be nice to have that sort of feature out of the 
box (and without the performance hit!).  This would help a great deal in 
a situation such as Dexter's.
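
A manual version of that idea might look roughly like this (a sketch 
only, with placeholder device names):

  # build each member as a one-sided RAID 1 using the "missing" keyword
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
  # later, when /dev/sdb1 starts throwing recoverable errors, attach a
  # mirror and let it sync while the array stays online
  mdadm --add /dev/md1 /dev/sdc1
  # once the sync finishes, retire the suspect drive
  mdadm --fail /dev/md1 /dev/sdb1
  mdadm --remove /dev/md1 /dev/sdb1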


-Brendan ([EMAIL PROTECTED])