Re: limits on raid

2007-06-22 Thread david
On Fri, 22 Jun 2007, David Greaves wrote: That's not a bad thing - until you look at the complexity it brings - and then consider the impact and exceptions when you do, eg hardware acceleration? md information fed up to the fs layer for xfs? simple long term maintenance? Often these

Re: limits on raid

2007-06-22 Thread David Greaves
Bill Davidsen wrote: David Greaves wrote: [EMAIL PROTECTED] wrote: On Fri, 22 Jun 2007, David Greaves wrote: If you end up 'fiddling' in md because someone specified --assume-clean on a raid5 [in this case just to save a few minutes *testing time* on system with a heavily choked bus!] then

Re: limits on raid

2007-06-22 Thread david
On Fri, 22 Jun 2007, Bill Davidsen wrote: By delaying parity computation until the first write to a stripe only the growth of a filesystem is slowed, and all data are protected without waiting for the lengthy check. The rebuild speed can be set very low, because on-demand rebuild will do

Re: limits on raid

2007-06-21 Thread David Greaves
Neil Brown wrote: This isn't quite right. Thanks :) Firstly, it is mdadm which decided to make one drive a 'spare' for raid5, not the kernel. Secondly, it only applies to raid5, not raid6 or raid1 or raid10. For raid6, the initial resync (just like the resync after an unclean shutdown)
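
A rough illustration of that behaviour (device and array names here are hypothetical, and the exact /proc/mdstat wording varies by kernel and mdadm version): creating a raid5 and a raid6 and watching the initial build shows the difference.

    # raid5: mdadm builds the array degraded plus one spare, so mdstat reports a recovery
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
    # raid6 (and either level after an unclean shutdown): mdstat reports a resync instead
    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[fghi]
    cat /proc/mdstat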

Re: limits on raid

2007-06-21 Thread Mark Lord
[EMAIL PROTECTED] wrote: On Thu, 21 Jun 2007, David Chinner wrote: On Thu, Jun 21, 2007 at 12:56:44PM +1000, Neil Brown wrote: I have that - apparently naive - idea that drives use strong checksum, and will never return bad data, only good data or an error. If this isn't right, then it

Re: limits on raid

2007-06-21 Thread Nix
On 21 Jun 2007, Neil Brown stated: I have that - apparently naive - idea that drives use strong checksum, and will never return bad data, only good data or an error. If this isn't right, then it would really help to understand what the cause of other failures are before working out how to

Re: limits on raid

2007-06-21 Thread Bill Davidsen
I didn't get a comment on my suggestion for a quick and dirty fix for --assume-clean issues... Bill Davidsen wrote: Neil Brown wrote: On Thursday June 14, [EMAIL PROTECTED] wrote: it's now churning away 'rebuilding' the brand new array. a few questions/thoughts. why does it need to do a
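
For reference, the quick-and-dirty route under discussion is the existing mdadm flag; a minimal, hypothetical 4-disk example (the thread's actual array is 45 drives):

    # skips the initial resync entirely; parity is NOT actually consistent yet
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --assume-clean /dev/sd[bcde]

The debate above is about what should happen afterwards, since parity only becomes trustworthy once each stripe has been written.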

Re: limits on raid

2007-06-19 Thread Phillip Susi
[EMAIL PROTECTED] wrote: one channel, 2 OS drives plus the 45 drives in the array. Huh? You can only have 16 devices on a scsi bus, counting the host adapter. And I don't think you can even manage that much reliably with the newer higher speed versions, at least not without some very

Re: limits on raid

2007-06-19 Thread Lennart Sorensen
On Mon, Jun 18, 2007 at 02:56:10PM -0700, [EMAIL PROTECTED] wrote: yes, I'm using promise drive shelves, I have them configured to export the 15 drives as 15 LUNs on a single ID. I'm going to be using this as a huge circular buffer that will just be overwritten eventually 99% of the

Re: limits on raid

2007-06-19 Thread david
On Tue, 19 Jun 2007, Lennart Sorensen wrote: On Mon, Jun 18, 2007 at 02:56:10PM -0700, [EMAIL PROTECTED] wrote: yes, I'm using promise drive shelves, I have them configured to export the 15 drives as 15 LUNs on a single ID. I'm going to be using this as a huge circular buffer that will just

Re: limits on raid

2007-06-18 Thread Brendan Conoboy
[EMAIL PROTECTED] wrote: in my case it takes 2+ days to resync the array before I can do any performance testing with it. for some reason it's only doing the rebuild at ~5M/sec (even though I've increased the min and max rebuild speeds and a dd to the array seems to be ~44M/sec, even during
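
The knobs referred to here are the md sync throttles; a minimal sketch, assuming a 2.6-era kernel and a hypothetical /dev/md0 (values are in KB/s):

    # system-wide limits
    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
    echo 50000 > /proc/sys/dev/raid/speed_limit_min
    # per-array overrides, if the kernel exposes them
    echo 50000 > /sys/block/md0/md/sync_speed_min
    grep -A2 md0 /proc/mdstat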

Re: limits on raid

2007-06-18 Thread david
On Mon, 18 Jun 2007, Brendan Conoboy wrote: [EMAIL PROTECTED] wrote: in my case it takes 2+ days to resync the array before I can do any performance testing with it. for some reason it's only doing the rebuild at ~5M/sec (even though I've increased the min and max rebuild speeds and a dd

Re: limits on raid

2007-06-18 Thread david
On Mon, 18 Jun 2007, Lennart Sorensen wrote: On Mon, Jun 18, 2007 at 10:28:38AM -0700, [EMAIL PROTECTED] wrote: I plan to test the different configurations. however, if I was saturating the bus with the reconstruct how can I fire off a dd if=/dev/zero of=/mnt/test and get ~45M/sec while only
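
The measurement being described is roughly the following (the path is from the thread; block size and count are hypothetical, and this is not a rigorous benchmark):

    # sequential write to the filesystem on the array while the rebuild runs
    dd if=/dev/zero of=/mnt/test bs=1M count=2000
    # in another terminal, watch the reported resync rate
    watch -n1 cat /proc/mdstat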

Re: limits on raid

2007-06-18 Thread david
On Mon, 18 Jun 2007, Brendan Conoboy wrote: [EMAIL PROTECTED] wrote: I plan to test the different configurations. however, if I was saturating the bus with the reconstruct how can I fire off a dd if=/dev/zero of=/mnt/test and get ~45M/sec while only slowing the reconstruct to ~4M/sec?

Re: limits on raid

2007-06-18 Thread david
On Mon, 18 Jun 2007, Lennart Sorensen wrote: On Mon, Jun 18, 2007 at 11:12:45AM -0700, [EMAIL PROTECTED] wrote: simple ultra-wide SCSI to a single controller. Hmm, isn't ultra-wide limited to 40MB/s? Is it Ultra320 wide? That could do a lot more, and 220MB/s sounds plausible for 320 scsi.

Re: limits on raid

2007-06-18 Thread Brendan Conoboy
[EMAIL PROTECTED] wrote: yes, sorry, ultra 320 wide. Exactly how many channels and drives? -- Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]

Re: limits on raid

2007-06-18 Thread david
On Mon, 18 Jun 2007, Brendan Conoboy wrote: [EMAIL PROTECTED] wrote: yes, sorry, ultra 320 wide. Exactly how many channels and drives? one channel, 2 OS drives plus the 45 drives in the array. yes I realize that there will be bottlenecks with this, the large capacity is to handle longer
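
Back-of-the-envelope, which is roughly what the bottleneck concern comes down to (assuming a single Ultra320 channel shared by all 47 devices and ignoring protocol overhead):

    # ~320 MB/s of bus shared across ~47 drives if they are all busy at once
    echo $((320 / 47))   # => 6, i.e. roughly 6-7 MB/s per drive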

Re: limits on raid

2007-06-17 Thread Andi Kleen
Neil Brown [EMAIL PROTECTED] writes: Having the filesystem duplicate data, store checksums, and be able to find a different copy if the first one it chose was bad is very sensible and cannot be done by just putting the filesystem on RAID. Apropos checksums: since RAID5 copies/xors anyways it

Re: limits on raid

2007-06-17 Thread Wakko Warner
dean gaudet wrote: On Sat, 16 Jun 2007, Wakko Warner wrote: When I've had an unclean shutdown on one of my systems (10x 50gb raid5) it's always slowed the system down when booting up. Quite significantly I must say. I wait until I can login and change the rebuild max speed to slow it

Re: limits on raid

2007-06-17 Thread Bill Davidsen
Neil Brown wrote: On Thursday June 14, [EMAIL PROTECTED] wrote: On Fri, 15 Jun 2007, Neil Brown wrote: On Thursday June 14, [EMAIL PROTECTED] wrote: what is the limit for the number of devices that can be in a single array? I'm trying to build a 45x750G array and want to

Re: limits on raid

2007-06-17 Thread Wakko Warner
dean gaudet wrote: On Sun, 17 Jun 2007, Wakko Warner wrote: i use an external write-intent bitmap on a raid1 to avoid this... you could use internal bitmap but that slows down i/o too much for my tastes. i also use an external xfs journal for the same reason. 2 disk raid1 for
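
For concreteness, the setup being described maps to something like the following sketch (device and file names are hypothetical; an external bitmap file has to live on a filesystem that is not on the array itself):

    # internal write-intent bitmap (simplest, but costs some write latency)
    mdadm --grow /dev/md0 --bitmap=internal
    # or an external bitmap file on a separate disk
    mdadm --grow /dev/md0 --bitmap=none
    mdadm --grow /dev/md0 --bitmap=/var/md0-bitmap
    # external xfs journal on a separate device
    mkfs.xfs -l logdev=/dev/sdf1 /dev/md0
    mount -o logdev=/dev/sdf1 /dev/md0 /mnt/data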

Re: limits on raid

2007-06-17 Thread David Chinner
On Sat, Jun 16, 2007 at 07:59:29AM +1000, Neil Brown wrote: Combining these thoughts, it would make a lot of sense for the filesystem to be able to say to the block device "That block looks wrong - can you find me another copy to try?". That is an example of the sort of closer integration

Re: limits on raid

2007-06-16 Thread david
On Sat, 16 Jun 2007, Neil Brown wrote: It would be possible to have a 'this is not initialised' flag on the array, and if that is not set, always do a reconstruct-write rather than a read-modify-write. But the first time you have an unclean shutdown you are going to resync all the parity
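
The parity scrub that an unclean shutdown forces can also be driven by hand, which is the operation this trade-off is about; a sketch assuming a hypothetical /dev/md0 on a kernel new enough to expose sync_action:

    echo check > /sys/block/md0/md/sync_action    # read everything, count parity mismatches
    cat /sys/block/md0/md/mismatch_cnt
    echo repair > /sys/block/md0/md/sync_action   # rewrite parity where it disagrees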

Re: limits on raid

2007-06-15 Thread Neil Brown
On Friday June 15, [EMAIL PROTECTED] wrote: As I understand the way raid works, when you write a block to the array, it will have to read all the other blocks in the stripe and recalculate the parity and write it out. Your understanding is