Jeff Breidenbach wrote:
I'm planning to take some RAID-1 drives out of an old machine
and plop them into a new machine. Hoping that mdadm assemble
will magically work. There's no reason it shouldn't work. Right?
old [ mdadm v1.9.0 / kernel 2.6.17 / Debian Etch / x86-64 ]
new [ mdadm v2.6.2
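In most cases it should, provided the superblocks are intact. A sketch of the assembly step on the new machine (the device names below are placeholders, not taken from the original post):

```shell
# Scan all block devices for md superblocks and assemble whatever
# arrays are found; devices without raid metadata are left alone.
mdadm --assemble --scan

# Or assemble the transplanted pair explicitly under a new md name
# (/dev/sdc1 and /dev/sdd1 are example names for the old disks).
mdadm --assemble /dev/md1 /dev/sdc1 /dev/sdd1
```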
It's not a RAID issue, but make sure you don't have any duplicate volume
names. According to Murphy's Law, if there are two / volumes, the wrong
one will be chosen upon your next reboot.
Thanks for the tip. Since I'm not using volumes or LVM at all, I should be
safe from this particular gotcha.
Jeff Breidenbach wrote:
Does the new machine have a RAID array already?
Yes, the new machine already has one RAID array.
After sneakernet it should have two RAID arrays. Is
there a gotcha?
I just discovered (the hard way, sigh, but not too much data loss) that a
4-drive RAID 10 array had the mirroring set up incorrectly.
Given 4 drives A, B, C and D, I had intended to mirror A-C and B-D,
so that I could split the mirror and run on either (A,B) or (C,D).
However, it turns out that
This patch changes the disk to be read for layout far 1 to always be
the disk with the lowest block address.
Thus the chunks to be read will always come (for a fully functioning array)
from the first band of stripes, and the RAID will then behave like a RAID0
consisting of the first band of stripes.
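The selection rule described above can be sketched at chunk granularity as follows (a simplified model with two "far" copies; the function names and layout arithmetic are illustrative, not the kernel's actual code):

```python
def far_copies(chunk, ndisks, chunks_per_band):
    """Return (disk, offset_in_chunks) for both copies of a chunk
    in a simplified raid10 'far 2' layout."""
    stripe, disk = divmod(chunk, ndisks)
    first = (disk, stripe)                      # copy in the first band
    second = ((disk + 1) % ndisks,              # copy shifted one disk,
              chunks_per_band + stripe)         # stored in the second band
    return [first, second]

def read_copy(chunk, ndisks, chunks_per_band):
    """The patch's policy: read the copy with the lowest block address."""
    return min(far_copies(chunk, ndisks, chunks_per_band),
               key=lambda copy: copy[1])

# With this policy every read lands in the first band, so a sequential
# read walks the disks round-robin exactly like a RAID0 made of the
# first band of stripes.
```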