I just had a disk die in a 2.6.16 (Debian kernel) raid1 server, and it's
triggered an oops in raid1.
There are a bunch of 2-partition mirrors:
Personalities : [raid1]
md5 : active raid1 hdc7[2](F) hda7[1]
77625984 blocks [2/1] [_U]
md4 : active raid1 hdc6[2](F) hda6[1]
16000640
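Once the failed halves are marked (F) like this, the usual recovery after
replacing the disk is to hot-remove and re-add each member so the kernel
resyncs the mirror. A sketch, assuming the device names from the
/proc/mdstat output above (this is not from the original mail):

```shell
# Hot-remove the failed members, then add the replacement partitions
# so md resyncs each mirror. Adjust device names for your layout.
mdadm /dev/md5 --remove /dev/hdc7
mdadm /dev/md5 --add /dev/hdc7

mdadm /dev/md4 --remove /dev/hdc6
mdadm /dev/md4 --add /dev/hdc6

# Watch the resync progress:
cat /proc/mdstat
```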
[EMAIL PROTECTED] said:
Wondering if anyone can comment on an easy way to get grub to update
all components in a raid1 array. I have a raid1 /boot with a raid10
/root and have previously used lilo with the raid-extra-boot option to
install to boot sectors of all component devices. With grub
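With legacy GRUB there is no direct raid-extra-boot equivalent; one common
workaround is to run the grub shell once per component disk, remapping each
to (hd0) so the boot sector it writes references the disk it lives on. A
sketch, assuming legacy GRUB, that /boot is the first partition on both
drives, and placeholder device names (none of these specifics come from the
original mail):

```shell
# Install a boot sector on each raid1 component in turn. The 'device'
# command remaps (hd0), so either disk can boot standalone if the
# other fails.
grub --batch <<EOF
device (hd0) /dev/hda
root (hd0,0)
setup (hd0)
device (hd0) /dev/hdc
root (hd0,0)
setup (hd0)
EOF
```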
[EMAIL PROTECTED] said:
Maybe I could test if /dev was a mount point?
Any other ideas?
There's a udevd process you can check for. I don't know whether that's a
better test or not.
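Both checks are cheap, so a script could just try each. A minimal sketch,
assuming a POSIX shell and treating either signal (a running udevd, or /dev
mounted as its own filesystem) as evidence that udev manages /dev:

```shell
#!/bin/sh
# Two hypothetical ways to detect a udev-managed /dev:
#  1. a udevd process is running
#  2. /dev is a mount point in its own right (e.g. tmpfs)
if pidof udevd >/dev/null 2>&1; then
    udev_running=yes
else
    udev_running=no
fi

if mountpoint -q /dev 2>/dev/null; then
    dev_mounted=yes
else
    dev_mounted=no
fi

echo "udevd running: $udev_running, /dev is a mount point: $dev_mounted"
```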
Jason
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
I have a 4-disk raid5 (sda3, sdb3, hda1, hdc1). sda and sdb share a
Silicon Image SATA card. sdb died completely; then, 20 minutes later,
the sata_sil driver became fatally confused and the machine locked up.
I shut down the machine and waited until I had a replacement for sdb.
I've got a
[EMAIL PROTECTED] said:
How do I get this array going again? Am I doing something wrong?
Reading the list archives indicates that there could be bugs in this
area, or that I may need to recreate the array with -C (though that
seems heavy-handed to me).
This is what I ended up doing. I made
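For context, the step usually tried before resorting to mdadm -C is a forced
assembly, which bumps the event counts on slightly stale superblocks so a
nearly-in-sync member is accepted. A sketch, assuming the surviving member
partitions named earlier in this thread and a placeholder array name of
/dev/md0 (the original mail does not name the array):

```shell
# Stop any half-assembled array first, then force assembly from the
# members that are still consistent. --force accepts members whose
# event counts are slightly behind.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda3 /dev/hda1 /dev/hdc1

# Once the array is running degraded, add the replacement disk and
# let it rebuild:
mdadm /dev/md0 --add /dev/sdb3
cat /proc/mdstat
```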
On Tue, Jan 31, 2006 at 09:19:16PM +0100, Molle Bestefich wrote:
I *think* that the raid developers may be, for once, choosing words
not-so-wisely when talking about deprecating autoassembly.
You're right, I should be careful not to imply anything Neil didn't
actually say. What he said was: