On Monday August 28, [EMAIL PROTECTED] wrote:
> This might be a dumb question, but what causes md to use a large amount of
> cpu resources when reading a large amount of data from a raid1 array?
I assume you meant raid5 there.
md/raid5 shouldn't use that much CPU when reading.
It does use more th
On Monday September 4, [EMAIL PROTECTED] wrote:
> Neil Brown <[EMAIL PROTECTED]> wrote:
>
> > On Sunday September 3, [EMAIL PROTECTED] wrote:
> >
> >> I have a really really big problem. In fact, the problem is the output of
> >> mdadm --examine as shown on
> >> http://nomorepasting.com/paste.ph
Neil Brown <[EMAIL PROTECTED]> wrote:
> On Sunday September 3, [EMAIL PROTECTED] wrote:
>
>> I have a really really big problem. In fact, the problem is the output of
>> mdadm --examine as shown on http://nomorepasting.com/paste.php?pasteID=68021
>
> Please explain why you think that output is a
On Monday August 28, [EMAIL PROTECTED] wrote:
> Neil Brown wrote:
> > On Saturday August 26, [EMAIL PROTECTED] wrote:
> >> All,
> >>
> >> [...]
> >>
> >> * Problem 1: Since moving from 2.4 -> 2.6 kernel, a reboot kicks one
> >> device out of the array (c.f. post by Andreas Pelzner on 24th Aug 2006
On Monday August 28, [EMAIL PROTECTED] wrote:
> On Tue, 29 Aug 2006 15:39:24 +1000
> NeilBrown <[EMAIL PROTECTED]> wrote:
>
> >
> > Each backing_dev needs to be able to report whether it is congested,
> > either by modulating BDI_*_congested in ->state, or by
> > defining a ->congested_fn.
> > md
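A ->congested_fn for a RAID personality might look roughly like the following. This is a hedged sketch against the 2.6.18-era API only; the example_conf structure, its mirrors array, and the raid_disks count are illustrative assumptions, not the real md code.

```c
/* Sketch only: report congestion for a mirrored array by OR-ing together
 * the congestion state of every member device's request queue. */
static int example_congested(void *data, int bdi_bits)
{
	struct example_conf *conf = data;	/* passed via ->congested_data */
	int i, ret = 0;

	for (i = 0; i < conf->raid_disks; i++) {
		struct request_queue *q =
			bdev_get_queue(conf->mirrors[i].rdev->bdev);

		/* bdi_congested() consults the member's own ->congested_fn,
		 * if it defines one, or its BDI_*_congested bits in ->state. */
		ret |= bdi_congested(&q->backing_dev_info, bdi_bits);
	}
	return ret;
}
```

The array is then reported congested whenever any member's backing device is, which is the natural choice for a mirror where a write must reach every disk.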
On Wednesday August 30, [EMAIL PROTECTED] wrote:
> Hi all,
>
> I have an IMAP mail server where mail messages are stored on a RAID1 array.
> The access on that array (/dev/md3) has seemed slow, so I did some
> investigating.
> "iostat -x /dev/hd[bd] /dev/md3" shows this:
>
> avg-cpu: %user
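The truncated iostat output above can be regenerated with a command along these lines; a minimal sketch, using the device names from the message (/dev/hdb, /dev/hdd, /dev/md3), while the 5-second interval and sample count are illustrative assumptions.

```shell
# Extended per-device statistics for the two RAID1 members and the array
# itself, sampled three times at 5-second intervals.
iostat -x /dev/hdb /dev/hdd /dev/md3 5 3
```

Comparing await and %util between the member disks and /dev/md3 is usually the quickest way to see whether one mirror is dragging the array down.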
On Thursday August 31, [EMAIL PROTECTED] wrote:
> Hi all,
>
> Just wondering if there is any way to get mdadm-created multipath devices
> to re-activate a previously disabled path?

>
> I know I can
>
> mdadm /dev/md0 -f /dev/sdx -r /dev/sdx -a /dev/sdx
>
> to re-activate it, but I want mdadm t
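The -f/-r/-a shorthand quoted above expands to the following long-option sequence; a sketch only, using the same placeholder names (/dev/md0, /dev/sdx) as the message.

```shell
# Manually cycle a multipath path back into service: mark it faulty,
# remove it from the array, then hot-add it again.
mdadm /dev/md0 --fail /dev/sdx
mdadm /dev/md0 --remove /dev/sdx
mdadm /dev/md0 --add /dev/sdx
```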
On Saturday September 2, [EMAIL PROTECTED] wrote:
>
> Attempting to build a new raid5 md array across 4 hard drives. At the
> exact moment that the drive finishes rebuilding, the superblock checksum
> changes to an invalid value. During the rebuild, mdadm -E for the 4
> drives shows:
>
>
On Sunday September 3, [EMAIL PROTECTED] wrote:
> Hello GABELN
>
> I have a really really big problem. In fact, the problem is the output of
> mdadm --examine as shown on http://nomorepasting.com/paste.php?pasteID=68021
Please explain why you think that output is a problem. It looks fine
to me.
On Sunday September 3, [EMAIL PROTECTED] wrote:
> On Sun, 3 Sep 2006, Clive Messer wrote:
>
> > This leads me to a question. I understand from reading the linux-raid
> > archives
> > that the current behaviour when rebuilding with a single badblock on
> > another
> > disk is for that disk to a
Hi,
I would like to ask for advice on the following subject:
I've recently acquired a FOXCONN 945G7MA-8EKRS2 mATX board, which comes
with an Intel ICH7R chipset, and I would like to make a RAID-0 array
with 3 disks of equal size (750GB in this case) with an XFS filesystem
(system configuration is
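As a rough sketch of the setup being described (not a tested recipe): the array and filesystem could be created along these lines, where the partition names and the 256 KiB chunk size are illustrative assumptions, not values from the message.

```shell
# Create a 3-disk RAID-0 array, then put XFS on it.  mkfs.xfs is given the
# stripe geometry explicitly (su = chunk size, sw = number of data disks).
mdadm --create /dev/md0 --level=0 --raid-devices=3 --chunk=256 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.xfs -d su=256k,sw=3 /dev/md0
```

Passing the stripe geometry to mkfs.xfs keeps allocation aligned to the RAID-0 chunk boundaries, which matters for throughput on striped arrays.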
2006/9/4, Clive Messer <[EMAIL PROTECTED]>:
Does anyone have a preference for cards other than 3ware if purchasing
a 'hardware' raid card? From what I have read, the 3ware cards are very well
regarded and the driver is open source.
OpenBSD recommends LSI MegaRaid.
Best
Martin
On Sunday 03 September 2006 14:11, you wrote:
> On Sun, 3 Sep 2006, Clive Messer wrote:
> > This leads me to a question. I understand from reading the linux-raid
> > archives that the current behaviour when rebuilding with a single
> > badblock on another disk is for that disk to also be kicked fro
On Sunday 03 September 2006 23:32, you wrote:
> On Sun, 3 Sep 2006, Clive Messer wrote:
> > This leads me to a question. I understand from reading the linux-raid
> > archives that the current behaviour when rebuilding with a single
> > badblock on another disk is for that disk to also be kicked fro
On Sun, 3 Sep 2006, Clive Messer wrote:
> This leads me to a question. I understand from reading the linux-raid
> archives
> that the current behaviour when rebuilding with a single badblock on another
> disk is for that disk to also be kicked from the array.
that's not quite the current behav
Array is online, degraded for the moment, but I can access the file systems for
backups.
I passed -A --force to mdadm; it seems that did the trick.
What still puzzles me is that this is the third time I have had a degraded
array, and I could never tell why it happened in the first place.
This time the mac
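For reference, the forced assembly mentioned above looks like this in long form; the device names are illustrative assumptions.

```shell
# --assemble --force lets mdadm assemble the array even when member event
# counts disagree, which is what brings a wrongly-degraded array back online.
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
```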
Tuomas Leikola wrote:
[]
> Here's an alternate description. On first 'unrecoverable' error, the
> disk is marked as FAILING, which means that a spare is immediately
> taken into use to replace the failing one. The disk is not kicked, and
> readable blocks can still be used to rebuild other blocks (
On 9/3/06, Tuomas Leikola <[EMAIL PROTECTED]> wrote:
> Possibly safer to recreate with two missing if you aren't sure of the
> order. That way you can look in the array to see if it looks right,
> or if you have to try a different order.
I'd say it's safer to recreate with all disks, in order t
Possibly safer to recreate with two missing if you aren't sure of the
order. That way you can look in the array to see if it looks right,
or if you have to try a different order.
I'd say it's safer to recreate with all disks, in order to get the
resync. Otherwise you risk the all so famous sile
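The trade-off being debated can be sketched for a hypothetical 4-disk RAID-5; note that a RAID-5 can start with at most one member missing, so this adapts the idea to a single "missing" slot. Device names are illustrative assumptions.

```shell
# Recreating with a slot marked "missing" starts the array degraded: no
# resync runs, so the contents can be inspected before committing further.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 missing
# Recreating with all members triggers a resync, which rewrites parity and
# surfaces any latent unreadable sectors on the other disks.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```

Either way, recreating over existing data is a last resort: the device order must match the original array exactly, or the contents will appear scrambled.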
This way I could get the replacement in and do the resync without
actually having to degrade the array first.
2) This sort of brings up a subject I'm getting increasingly paranoid
about. It seems to me that if disk 1 develops an unrecoverable error at
block 500 and disk 4 develops one at 55,00
On Sun, 3 Sep 2006, Clive Messer wrote:
This leads me to a question. I understand from reading the linux-raid
archives that the current behaviour when rebuilding with a single
badblock on another disk is for that disk to also be kicked from the
array. For some time I have considered buying a 3
For 2.6.18-rc5-mm1, just edit drivers/md/Kconfig and change the 'if
CONFIG_BLOCK' to 'if BLOCK'. You can then run 'make oldconfig' and you
should be all set.
It's a simple change, and easier than hunting down and applying a
patch at this point. :]
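The edit can be scripted rather than done by hand; a sketch, run from the top of the 2.6.18-rc5-mm1 source tree.

```shell
# Replace the bad symbol in the md Kconfig, then regenerate .config.
sed -i 's/if CONFIG_BLOCK/if BLOCK/' drivers/md/Kconfig
make oldconfig
```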
Richard Scobie wrote:
> Josh Litherland wrote:
>> On Sun, 2006-09-03 at 15:56 +1200, Richard Scobie wrote:
>>
>>> I am building 2.6.18rc5-mm1 and I cannot find the entry under "make
>>> config", to enable the various RAID options.
>>
>>
>> Under "Device Drivers", switch on "Multi-device support".
Hi,
I lost a disk from a 4-disk RAID-5 array; usually not a problem: investigate
and add it back in. Normally nothing is wrong with the disk, possibly an ICH6
libata bug when heavy I/O is taking place. Yesterday, when adding it back to
the array, I ran into a problem with 3 badblocks on one of the o