Richard Scobie wrote:
Josh Litherland wrote:
On Sun, 2006-09-03 at 15:56 +1200, Richard Scobie wrote:
I am building 2.6.18-rc5-mm1 and I cannot find the entry under make
config to enable the various RAID options.
Under Device Drivers, switch on Multi-device support.
Thanks. I must be
For 2.6.18-rc5-mm1, just edit drivers/md/Kconfig and change the 'if
CONFIG_BLOCK' to 'if BLOCK'. You can then run 'make oldconfig' and you
should be all set.
It's a simple change, and easier than hunting down and applying a
patch at this point. :]
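The edit suggested above can be scripted; a minimal sketch, assuming you are at the top of the kernel source tree and the guard appears as a single 'if CONFIG_BLOCK' line (sed keeps a .bak copy so the edit can be reverted):

```shell
# Hedged sketch: apply the one-line Kconfig fix described above.
# Assumes the stock path drivers/md/Kconfig; a .bak backup is kept.
fix_md_kconfig() {
    local kconfig="${1:-drivers/md/Kconfig}"
    # Change the dependency guard from CONFIG_BLOCK to BLOCK.
    sed -i.bak 's/if CONFIG_BLOCK/if BLOCK/' "$kconfig"
}

# After the edit, refresh the configuration:
#   make oldconfig
```

Run `fix_md_kconfig` from the source tree top, then `make oldconfig` as the post says.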
On Sun, 3 Sep 2006, Clive Messer wrote:
This leads me to a question. I understand from reading the linux-raid
archives that the current behaviour when rebuilding with a single
badblock on another disk is for that disk to also be kicked from the
array. For some time I have considered buying a
This way I could get the replacement in and do the resync without
actually having to degrade the array first.
snip
2) This sort of brings up a subject I'm getting increasingly paranoid
about. It seems to me that if disk 1 develops an unrecoverable error at
block 500 and disk 4 develops one at
Possibly safer to recreate with two missing if you aren't sure of the
order. That way you can look in the array to see if it looks right,
or if you have to try a different order.
I'd say it's safer to recreate with all disks, in order to get the
resync. Otherwise you risk the oh-so-famous
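For what it's worth, the "recreate with a missing member, inspect, try another order" loop described above can be sketched as below. Every device name and parameter here is a hypothetical example, and the commands are only echoed (a dry run), because mdadm --create with the wrong order on real disks is destructive:

```shell
# Hedged sketch of trying one candidate device order for a 4-disk raid5.
# The 'missing' placeholder keeps md from starting a resync, and the
# read-only mount lets you check whether the data looks sane.
try_order() {
    # Print (rather than run) the commands; drop the echoes only when
    # you are certain of what you are doing.
    echo mdadm --create /dev/md0 --level=5 --raid-devices=4 "$@" missing
    echo mount -o ro /dev/md0 /mnt
    echo mdadm --stop /dev/md0
}

try_order /dev/sda1 /dev/sdb1 /dev/sdc1
```

If the mounted data looks wrong, stop the array and call `try_order` again with the members permuted.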
Tuomas Leikola wrote:
[...]
Here's an alternate description. On first 'unrecoverable' error, the
disk is marked as FAILING, which means that a spare is immediately
taken into use to replace the failing one. The disk is not kicked, and
readable blocks can still be used to rebuild other blocks
The array is online, degraded for the moment, but I can access the
filesystems for backups.
I passed -A --force to mdadm; it seems that did the trick.
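A forced assembly of that kind, written out long-form, might look like the following dry-run sketch (the md device and member names are placeholders; the commands are echoed rather than executed):

```shell
# Hedged sketch: force-assemble an array that mdadm refuses to start
# automatically (e.g. because member event counts disagree slightly).
force_assemble() {
    echo mdadm --assemble --force "$@"
    echo cat /proc/mdstat   # then verify the array came up, degraded or not
}

force_assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```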
What still puzzles me is that this is the third time I have had a degraded
array, and I could never tell why it happened in the first place.
This time the
On Sun, 3 Sep 2006, Clive Messer wrote:
This leads me to a question. I understand from reading the linux-raid
archives
that the current behaviour when rebuilding with a single badblock on another
disk is for that disk to also be kicked from the array.
that's not quite the current
2006/9/4, Clive Messer [EMAIL PROTECTED]:
Does anyone have a preference for cards other than 3ware if purchasing
a 'hardware' RAID card? From what I have read, the 3ware cards are very
well regarded and the driver is open source.
OpenBSD recommends LSI MegaRaid.
Best
Martin
Hi,
I would like to ask for advice on the following subject:
I've recently acquired a FOXCONN 945G7MA-8EKRS2 mATX board, which comes
with an Intel ICH7R chipset, and I would like to make a RAID-0 array
with 3 disks of equal size (750GB in this case), formatted with an XFS filesystem
(system configuration
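If the array is built with md rather than the on-board controller, the setup being described might be sketched like this. Device names, chunk size, and mount point are all assumptions, and the commands are echoed as a dry run:

```shell
# Hedged sketch: a 3-disk md raid0 with XFS on top, as a dry run.
# /dev/sd{a,b,c}1, the 256k chunk, and /mnt/data are placeholders.
make_raid0_xfs() {
    echo mdadm --create /dev/md0 --level=0 --raid-devices=3 \
        --chunk=256 /dev/sda1 /dev/sdb1 /dev/sdc1
    echo mkfs.xfs /dev/md0
    echo mount /dev/md0 /mnt/data
}

make_raid0_xfs
```

Note that raid0 has no redundancy: a single failed member loses the whole array.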
On Sunday September 3, [EMAIL PROTECTED] wrote:
Hello GABELN
I have a really really big problem. In fact, the problem is the output of
mdadm --examine as shown on http://nomorepasting.com/paste.php?pasteID=68021
Please explain why you think that output is a problem. It looks fine
to me.
On Saturday September 2, [EMAIL PROTECTED] wrote:
Attempting to build a new raid5 md array across 4 hard drives. At the
exact moment the rebuild finishes, the superblock checksum
changes to an invalid value. During the rebuild, mdadm -E for the 4
drives shows:
On Thursday August 31, [EMAIL PROTECTED] wrote:
Hi all,
Just wondering if there is any way to get mdadm-created multipath devices
to re-activate a previously disabled path?
I know I can
mdadm /dev/md0 -f /dev/sdx -r /dev/sdx -a /dev/sdx
to re-activate it, but I want mdadm to do it
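The fail/remove/add cycle quoted above, spelled out as a dry-run sketch (the device names are the poster's placeholders; the commands are echoed, not executed):

```shell
# Hedged sketch of manually re-activating a disabled multipath path:
# mark it failed, remove it, then re-add it, exactly as the one-liner
# above does with -f/-r/-a.
reactivate_path() {
    local md="$1" dev="$2"
    echo mdadm "$md" --fail   "$dev"
    echo mdadm "$md" --remove "$dev"
    echo mdadm "$md" --add    "$dev"
}

reactivate_path /dev/md0 /dev/sdx
```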
On Monday August 28, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Saturday August 26, [EMAIL PROTECTED] wrote:
All,
[...]
* Problem 1: Since moving from a 2.4 to a 2.6 kernel, a reboot kicks one
device out of the array (cf. post by Andreas Pelzner on 24th Aug 2006).
* Problem 2: