Re: How many drives are bad?

2008-02-19 Thread Justin Piszcz
How many drives actually failed? Failed Devices : 1 On Tue, 19 Feb 2008, Norman Elton wrote: So I had my first failure today, when I got a report that one drive (/dev/sdam) failed. I've attached the output of mdadm --detail. It appears that two drives are listed as removed, but the array is

How many drives are bad?

2008-02-19 Thread Norman Elton
So I had my first failure today, when I got a report that one drive (/dev/sdam) failed. I've attached the output of mdadm --detail. It appears that two drives are listed as removed, but the array is still functioning. What does this mean? How many drives actually failed? This is all a test
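A minimal way to see which slots md actually considers failed versus removed is to compare the array view with what each member reports about itself (the array and partition names below are only examples; Norman's will differ):

   # Overall view: active, failed and removed member counts
   cat /proc/mdstat
   mdadm --detail /dev/md0

   # Per-member view: ask each component what role it thinks it holds
   mdadm --examine /dev/sdam1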

Re: How many drives are bad?

2008-02-19 Thread Justin Piszcz
Neil, Is this a bug? Also, I have a question for Norman-- how come your drives are sda[a-z]1? Typically it is /dev/sda1, /dev/sdb1, etc.? Justin. On Tue, 19 Feb 2008, Norman Elton wrote: But why do two show up as removed?? I would expect /dev/sdal1 to show up someplace, either active or

Re: How many drives are bad?

2008-02-19 Thread Norman Elton
But why do two show up as removed?? I would expect /dev/sdal1 to show up someplace, either active or failed. Any ideas? Thanks, Norman On Feb 19, 2008, at 12:31 PM, Justin Piszcz wrote: How many drives actually failed? Failed Devices : 1 On Tue, 19 Feb 2008, Norman Elton wrote: So

Re: How many drives are bad?

2008-02-19 Thread Norman Elton
Justin, This is a Sun X4500 (Thumper) box, so it's got 48 drives inside. /dev/sd[a-z] are all there as well, just in other RAID sets. Once you get past /dev/sdz, the naming starts over at /dev/sdaa, sdab, etc. I'd be curious whether what I'm experiencing is a bug. What should I try to restore the array? Norman
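For anyone unfamiliar with that naming, a quick way to watch it roll over past /dev/sdz on a box with this many disks (output trimmed; the exact list depends on how many drives the controllers expose):

   ls /dev/sd[a-z] /dev/sd[a-z][a-z] 2>/dev/null
   # ... /dev/sdx /dev/sdy /dev/sdz /dev/sdaa /dev/sdab ... /dev/sdav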

Re: How many drives are bad?

2008-02-19 Thread Justin Piszcz
Norman, I am extremely interested in what distribution you are running on it and what type of SW RAID you are employing (besides the one you showed here). Are all 48 drives filled, or? Justin. On Tue, 19 Feb 2008, Norman Elton wrote: Justin, This is a Sun X4500 (Thumper) box, so it's got

Re: How many drives are bad?

2008-02-19 Thread Norman Elton
Justin, There was actually a discussion I fired off a few weeks ago about how to best run SW RAID on this hardware. Here's the recap: We're running RHEL, so no access to ZFS/XFS. I really wish we could do ZFS, but no luck. The box presents 48 drives, split across 6 SATA controllers. So disks

RE: How many drives are bad?

2008-02-19 Thread Steve Fairbairn
The box presents 48 drives, split across 6 SATA controllers. So disks sda-sdh are on one controller, etc. In our configuration, I run a RAID5 MD array for each controller, then run LVM on top of these to form one large VolGroup. I might be missing something here, and I realise you'd
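A rough sketch of the layout being described, assuming 8 disks per controller and using made-up device names (one RAID5 per controller, with LVM pooling the six arrays into a single VolGroup):

   # One RAID5 per controller (first controller shown; repeat for md1..md5)
   mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]1

   # Pool the six arrays into one volume group and carve a logical volume from it
   pvcreate /dev/md[0-5]
   vgcreate VolGroup /dev/md[0-5]
   lvcreate -l 100%FREE -n data VolGroup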

LVM performance (was: Re: RAID5 to RAID6 reshape?)

2008-02-19 Thread Oliver Martin
Janek Kozicki wrote: hold on. This might be related to raid chunk positioning with respect to LVM chunk positioning. If they interfere, there may indeed be some performance drop. Best to make sure that those chunks are aligned together. Interesting. I'm seeing a 20% performance drop too, with

Re: LVM performance (was: Re: RAID5 to RAID6 reshape?)

2008-02-19 Thread Jon Nelson
On Feb 19, 2008 1:41 PM, Oliver Martin [EMAIL PROTECTED] wrote: Janek Kozicki wrote: hold on. This might be related to raid chunk positioning with respect to LVM chunk positioning. If they interfere, there may indeed be some performance drop. Best to make sure that those chunks are aligned

Re: LVM performance (was: Re: RAID5 to RAID6 reshape?)

2008-02-19 Thread Iustin Pop
On Tue, Feb 19, 2008 at 01:52:21PM -0600, Jon Nelson wrote: On Feb 19, 2008 1:41 PM, Oliver Martin [EMAIL PROTECTED] wrote: Janek Kozicki wrote: $ hdparm -t /dev/md0 /dev/md0: Timing buffered disk reads: 148 MB in 3.01 seconds = 49.13 MB/sec $ hdparm -t /dev/dm-0

Re: LVM performance

2008-02-19 Thread Peter Rabbitson
Oliver Martin wrote: Interesting. I'm seeing a 20% performance drop too, with default RAID and LVM chunk sizes of 64K and 4M, respectively. Since 64K divides 4M evenly, I'd think there shouldn't be such a big performance penalty. I am no expert, but as far as I have read you must not only
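One way to line the two up, sketched under the assumption of a 64K RAID chunk (so the full stripe is 64K times the number of data disks) and a reasonably recent LVM2 that supports --dataalignment; the numbers are illustrative, not a recommendation:

   # e.g. a 5-disk RAID5 with 64K chunks -> 4 data disks -> 256K full stripe
   pvcreate --dataalignment 256k /dev/md0
   vgcreate -s 4m vg0 /dev/md0    # 4M extents are a whole multiple of the 256K stripe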

RE: How many drives are bad?

2008-02-19 Thread Guy Watkins
} -Original Message- } From: [EMAIL PROTECTED] [mailto:linux-raid- } [EMAIL PROTECTED] On Behalf Of Steve Fairbairn } Sent: Tuesday, February 19, 2008 2:45 PM } To: 'Norman Elton' } Cc: linux-raid@vger.kernel.org } Subject: RE: How many drives are bad? } } } } The box presents 48

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-02-19 Thread Peter Grandi
What sort of tools are you using to get these benchmarks, and can I use them for ext3? The only simple tools that I found that give semi-reasonable numbers, avoiding most of the many pitfalls of storage speed testing (almost all storage benchmarks I see are largely meaningless), are recent
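One quick-and-dirty sequential check (not necessarily the tools Peter has in mind) is a large dd with direct I/O, which sidesteps the page cache; the file path and sizes below are arbitrary:

   dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 oflag=direct   # sequential write
   dd if=/mnt/test/bigfile of=/dev/null bs=1M iflag=direct              # sequential read back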

Re: RAID5 to RAID6 reshape?

2008-02-19 Thread Alexander Kühn
- Message from [EMAIL PROTECTED] - Date: Mon, 18 Feb 2008 19:05:02 + From: Peter Grandi [EMAIL PROTECTED] Reply-To: Peter Grandi [EMAIL PROTECTED] Subject: Re: RAID5 to RAID6 reshape? To: Linux RAID linux-raid@vger.kernel.org On Sun, 17 Feb 2008 07:45:26 -0700,