Re: transferring RAID-1 drives via sneakernet

2008-02-12 Thread David Greaves
Jeff Breidenbach wrote:
 I'm planning to take some RAID-1 drives out of an old machine
 and plop them into a new machine. Hoping that mdadm assemble
 will magically work. There's no reason it shouldn't work. Right?
 
 old  [ mdadm v1.9.0 / kernel 2.6.17 / Debian Etch / x86-64 ]
 new [ mdadm v2.6.2 / kernel 2.6.22 / Ubuntu 7.10 server ]

I've done it several times.
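
FWIW, a minimal sketch of what the assemble can look like on the new box
(device names are examples only; check what the transplanted disks come up
as first, and note the metadata is presumably 0.90-format, the mdadm 1.9
default):

  # confirm the old superblocks are visible on the moved disks
  mdadm --examine /dev/sdc1 /dev/sdd1
  # assemble them under an md node the existing array isn't using
  mdadm --assemble /dev/md1 /dev/sdc1 /dev/sdd1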

Does the new machine have a RAID array already?

David



Re: transferring RAID-1 drives via sneakernet

2008-02-12 Thread Jeff Breidenbach
 It's not a RAID issue, but make sure you don't have any duplicate volume
 names.  According to Murphy's Law, if there are two / volumes, the wrong
 one will be chosen upon your next reboot.

Thanks for the tip. Since I'm not using volumes or LVM at all, I should be
safe from this particular problem.
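
For anyone who does use labelled filesystems, a rough way to spot duplicate
labels before rebooting (illustrative; blkid output varies by version):

  # list LABEL= / UUID= for every filesystem and eyeball for repeats
  blkid | sort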


Re: transferring RAID-1 drives via sneakernet

2008-02-12 Thread Brendan Conoboy

Jeff Breidenbach wrote:

Does the new machine have a RAID array already?


Yes, the new machine already has one RAID array.
After sneakernet it should have two RAID arrays. Is
there a gotcha?


It's not a RAID issue, but make sure you don't have any duplicate volume 
names.  According to Murphy's Law, if there are two / volumes, the wrong 
one will be chosen upon your next reboot.
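
Once both arrays are in place it's also worth pinning them down by UUID so
assembly is deterministic at boot. A sketch, assuming the Debian/Ubuntu
config path:

  # append ARRAY lines for every array mdadm can see
  mdadm --examine --scan >> /etc/mdadm/mdadm.conf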


--
Brendan Conoboy / Red Hat, Inc. / [EMAIL PROTECTED]


Re: transferring RAID-1 drives via sneakernet

2008-02-12 Thread Jeff Breidenbach
 Does the new machine have a RAID array already?

Yes, the new machine already has one RAID array.
After sneakernet it should have two RAID arrays. Is
there a gotcha?


Got raid10 assembled wrong - how to fix?

2008-02-12 Thread George Spelvin
I just discovered (the hard way, sigh, but not too much data loss) that a
4-drive RAID 10 array had the mirroring set up incorrectly.

Given 4 drives A, B, C and D, I had intended to mirror A-C and B-D,
so that I could split the mirror and run on either (A,B) or (C,D).

However, it turns out that the mirror pairs are A-B and C-D.  So
pulling both A and B off-line results in a non-functional array.

So basically what I need to do is to decommission B and C, and rebuild
the array with them swapped: A, C, B, D (with the near=2 layout, adjacent
raid slots mirror each other: 0-1 and 2-3, hence that order).

Can someone tell me if the following incantation is correct?

mdadm /dev/mdX -f /dev/B -r /dev/B
mdadm /dev/mdX -f /dev/C -r /dev/C
mdadm --zero-superblock /dev/B
mdadm --zero-superblock /dev/C
mdadm /dev/mdX -a /dev/C
mdadm /dev/mdX -a /dev/B

I'm assuming that fresh spares will be assigned to the lowest available
slot.  I just get nervous about commands with names like --zero-superblock
when I have data I'd rather not lose.
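
For what it's worth, a hedged way to double-check the result (mdX and the
drive letters are placeholders): swap one disk at a time and verify the
slot assignment after each rebuild completes:

  # see which device landed in which raid slot
  mdadm --detail /dev/mdX
  # watch resync progress before pulling the second disk
  cat /proc/mdstat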

Thanks!


patch for raid10,f1 to operate like raid0

2008-02-12 Thread Keld Jørn Simonsen
This patch changes the disk to be read, for layout far > 1, to always be
the disk with the lowest block address.

Thus (for a fully functioning array) the chunks to be read will always come
from the first band of stripes, and the raid will then work as a raid0
consisting of that first band of stripes.
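
(For reference, an array using this layout is created along these lines;
device names are illustrative only:

  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
        /dev/sda1 /dev/sdb1
)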

Some advantages:

The fastest part of the disks involved, the outer sectors, will be used.
The outer blocks of a disk may be as much as 100% faster than the inner blocks.

Average seek time will be lower, as seeks will always be confined to the
first part of the disks.

Mixed disks with different performance characteristics will work better:
since the array behaves like raid0, the sequential read rate will be the
number of disks involved times the I/O rate of the slowest disk.

If a disk is malfunctioning, the working disk with the lowest block address
for the logical block will be used.

Signed-off-by: Keld Simonsen [EMAIL PROTECTED]

--- raid10.c	2008-02-12 00:50:59.0 +0100
+++ raid10-ks.c	2008-02-12 00:51:09.0 +0100
@@ -537,7 +537,8 @@
 	current_distance = abs(r10_bio->devs[slot].addr -
 			       conf->mirrors[disk].head_position);
 
-	/* Find the disk whose head is closest */
+	/* Find the disk whose head is closest,
+	   or for far > 1 the closest to partition beginning */
 
 	for (nslot = slot; nslot < conf->copies; nslot++) {
 		int ndisk = r10_bio->devs[nslot].devnum;
@@ -557,7 +557,11 @@
 			slot = nslot;
 			break;
 		}
-		new_distance = abs(r10_bio->devs[nslot].addr -
+
+		/* for far > 1 always use the lowest address */
+		if (conf->far_copies > 1)
+			new_distance = r10_bio->devs[nslot].addr;
+		else new_distance = abs(r10_bio->devs[nslot].addr -
 				   conf->mirrors[ndisk].head_position);
 		if (new_distance < current_distance) {
 			current_distance = new_distance;