- Message from [EMAIL PROTECTED] -
Date: Mon, 18 Feb 2008 19:05:02 +
From: Peter Grandi <[EMAIL PROTECTED]>
Reply-To: Peter Grandi <[EMAIL PROTECTED]>
Subject: Re: RAID5 to RAID6 reshape?
To: Linux RAID
On Sun, 17 Feb 2008 07:45:26 -0700, "Conway S. Smith"
<[EMAIL PROTECTED]> wrote:
>> What sort of tools are you using to get these benchmarks, and can I
>> use them for ext3?
The only simple tools I have found that give semi-reasonable
numbers, avoiding most of the many pitfalls of storage speed
testing (almost all storage benchmarks I see are largely
meaningless), are recent v
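A common pitfall alluded to here is measuring the page cache rather than the disks. As a rough illustration only, and not necessarily the tools being referred to above, a sequential read can be sampled with the cache dropped and direct I/O; the device and size are just examples:
  # as root: flush cached data, then read straight from the array
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct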
} -Original Message-
} From: [EMAIL PROTECTED] [mailto:linux-raid-
} [EMAIL PROTECTED] On Behalf Of Steve Fairbairn
} Sent: Tuesday, February 19, 2008 2:45 PM
} To: 'Norman Elton'
} Cc: linux-raid@vger.kernel.org
} Subject: RE: How many drives are bad?
}
}
} >
} > The box presents 48 drives, split across 6 SATA controllers.
Oliver Martin wrote:
Interesting. I'm seeing a 20% performance drop too, with default RAID
and LVM chunk sizes of 64K and 4M, respectively. Since 64K divides 4M
evenly, I'd think there shouldn't be such a big performance penalty.
I am no expert, but as far as I have read you must not only have
On Tue, Feb 19, 2008 at 01:52:21PM -0600, Jon Nelson wrote:
> On Feb 19, 2008 1:41 PM, Oliver Martin
> <[EMAIL PROTECTED]> wrote:
> > Janek Kozicki schrieb:
> >
> > $ hdparm -t /dev/md0
> >
> > /dev/md0:
> > Timing buffered disk reads: 148 MB in 3.01 seconds = 49.13 MB/sec
> >
> > $ hdparm -t
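For the comparison being made in this thread, the raw MD device and the LVM volume sitting on top of it can each be given the same quick-and-dirty test; the logical volume path below is only an example name:
  $ hdparm -t /dev/md0
  $ hdparm -t /dev/mapper/VolGroup-testlv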
On Feb 19, 2008 1:41 PM, Oliver Martin
<[EMAIL PROTECTED]> wrote:
> Janek Kozicki schrieb:
> > hold on. This might be related to raid chunk positioning with respect
> > to LVM chunk positioning. If they interfere there indeed may be some
> > performance drop. Best to make sure that those chunks are aligned together.
>
> The box presents 48 drives, split across 6 SATA controllers.
> So disks sda-sdh are on one controller, etc. In our
> configuration, I run a RAID5 MD array for each controller,
> then run LVM on top of these to form one large VolGroup.
>
I might be missing something here, and I realise yo
Janek Kozicki schrieb:
hold on. This might be related to raid chunk positioning with respect
to LVM chunk positioning. If they interfere there indeed may be some
performance drop. Best to make sure that those chunks are aligned together.
Interesting. I'm seeing a 20% performance drop too, with default RAID
and LVM chunk sizes of 64K and 4M, respectively.
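One rough way to check whether the two layers line up (illustrative commands; the device name is an example) is to compare the RAID chunk size with the offset at which LVM starts placing data:
  $ mdadm --detail /dev/md0 | grep 'Chunk Size'
  $ pvs -o +pe_start /dev/md0
If the reported 1st PE offset is not a multiple of the chunk (or full stripe) size, newer LVM2 releases let you force it when creating the PV, e.g. pvcreate --dataalignment 512k /dev/md0; older releases can only approximate this by padding the metadata area size.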
Justin,
There was actually a discussion I fired off a few weeks ago about how
to best run SW RAID on this hardware. Here's the recap:
We're running RHEL, so no access to ZFS/XFS. I really wish we could do
ZFS, but no luck.
The box presents 48 drives, split across 6 SATA controllers. So disks
sda-sdh are on one controller, etc. In our configuration, I run a RAID5
MD array for each controller, then run LVM on top of these to form one
large VolGroup.
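As a sketch of that layout (device names, array numbers, and sizes are illustrative, not the actual commands used on the box):
  # one RAID5 array per 8-disk controller, repeated for md1..md5
  mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]1
  # pool the six arrays into a single volume group
  pvcreate /dev/md[0-5]
  vgcreate VolGroup /dev/md[0-5]
  lvcreate -L 500G -n data VolGroup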
Norman,
I am extremely interested in what distribution you are running on it and
what type of SW raid you are employing (besides the one you showed here).
Are all 48 drives filled, or?
Justin.
On Tue, 19 Feb 2008, Norman Elton wrote:
Justin,
This is a Sun X4500 (Thumper) box, so it's got 48 drives inside.
Justin,
This is a Sun X4500 (Thumper) box, so it's got 48 drives inside.
/dev/sd[a-z] are all there as well, just in other RAID sets. Once you
get to /dev/sdz, it starts up at /dev/sdaa, sdab, etc.
I'd be curious if what I'm experiencing is a bug. What should I try to
restore the array?
Norman
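Not an authoritative answer, but the usual first steps would be along these lines (the md device number below is a guess; /dev/sdal1 is the member the thread is asking about):
  # see what the kernel and the on-disk superblock think happened
  cat /proc/mdstat
  mdadm --examine /dev/sdal1
  # if the disk itself is healthy, add it back and let the array resync
  mdadm /dev/md3 --add /dev/sdal1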
Neil,
Is this a bug?
Also, I have a question for Norman-- how come your drives are sda[a-z]1?
Typically it is /dev/sda1 /dev/sdb1 etc?
Justin.
On Tue, 19 Feb 2008, Norman Elton wrote:
But why do two show up as "removed"?? I would expect /dev/sdal1 to show up
someplace, either active or failed.
But why do two show up as "removed"?? I would expect /dev/sdal1 to
show up someplace, either active or failed.
Any ideas?
Thanks,
Norman
On Feb 19, 2008, at 12:31 PM, Justin Piszcz wrote:
How many drives actually failed?
Failed Devices : 1
On Tue, 19 Feb 2008, Norman Elton wrote:
So I had my first "failure" today, when I got a report that one drive
How many drives actually failed?
Failed Devices : 1
On Tue, 19 Feb 2008, Norman Elton wrote:
So I had my first "failure" today, when I got a report that one drive
(/dev/sdam) failed. I've attached the output of "mdadm --detail". It
appears that two drives are listed as "removed", but the array is
still functioning.
So I had my first "failure" today, when I got a report that one drive
(/dev/sdam) failed. I've attached the output of "mdadm --detail". It
appears that two drives are listed as "removed", but the array is
still functioning. What does this mean? How many drives actually
failed?
This is all a test s