Re: raid10 on centos 5

2007-05-04 Thread Eli Stair
You shouldn't need to build a new kernel. Just extract the SRPM for the initial install (CentOS 5, no updated kernels), use the config for the appropriate kernel (SMP, UP, i386/x86_64), enable the raid10 module, and do a 'make modules'. You may need to do a minor amount of tweaking in the…
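The procedure above can be sketched roughly as follows. This is a hedged outline only: the exact SRPM filename, build paths, and kernel version are illustrative assumptions, not taken from the original mail, and CentOS-era rpmbuild layouts varied.

```shell
# Illustrative sketch of building just the raid10 module from the kernel SRPM.
# Package names, versions, and paths below are assumptions, not exact values.
rpm -ivh kernel-2.6.18-8.el5.src.rpm              # unpack the kernel SRPM
cd /usr/src/redhat/SPECS
rpmbuild -bp --target=x86_64 kernel-2.6.spec      # apply patches, prepare source
cd /usr/src/redhat/BUILD/kernel-2.6.18/linux-2.6.18.x86_64

cp /boot/config-$(uname -r) .config               # start from the shipped config
echo 'CONFIG_MD_RAID10=m' >> .config              # enable the raid10 module
make oldconfig
make modules                                      # builds drivers/md/raid10.ko
# then copy raid10.ko under /lib/modules/$(uname -r)/ and run depmod -a
```

The point of this approach is that the resulting module matches the running distribution kernel's config and ABI, so no reboot onto a custom kernel is needed.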

Re: RAID5 refuses to accept replacement drive.

2006-10-25 Thread Eli Stair
A tangentially related suggestion: if you layer dm-multipath on top of the raw block (SCSI, FC) layer, you add some complexity but also gain the benefit of periodic readsector0() checks... so if your spindle powers down unexpectedly but the controller thinks it's still alive, you…
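A minimal multipath.conf fragment showing the path checker being referred to might look like the following. This is a sketch under assumptions: readsector0 was the path checker name in multipath-tools of that era, but the polling interval and any device-specific stanzas shown here are illustrative, not from the original mail.

```
# /etc/multipath.conf -- illustrative fragment, not a complete config
defaults {
        polling_interval    5             # seconds between path checks (example value)
        path_checker        readsector0   # periodically read sector 0 of each path
}
```

With this in place, a path whose device has silently died fails the periodic sector-0 read and gets marked down, rather than the failure going unnoticed until real I/O hits it.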

Re: BUGS: internal bitmap during array create

2006-10-19 Thread Eli Stair
Neil, thanks a ton. This issue appears completely resolved: array creation/assembly is clean, filesystem creation and consistency checking are clean, and I have not generated any filesystem or md errors in testing yet. Cheers, /eli Neil Brown wrote: On Wednesday October 18, [EMAIL…

Re: [PATCH] md: Fix bug where new drives added to an md array sometimes don't sync properly.

2006-10-18 Thread Eli Stair
FYI, I'm testing 2.6.18.1 and noticed this RAID10 member mis-numbering issue is still extant. Even with this fix applied to raid10.c, I am still seeing repeatable issues with devices assuming a Number greater than the one they had when removed from a running array. Issue 1) I'm…

Re: BUGS: internal bitmap during array create

2006-10-12 Thread Eli Stair
…xfs_check: data size check failed Thanks! /eli Eli Stair wrote: After realizing my stupid error in specifying the bitmap during array creation, I've triggered a couple of 100% repeatable bugs with this scenario. BUG 1) When I create an array without a bitmap and add it after the array…

BUGS: internal bitmap during array create

2006-10-11 Thread Eli Stair
After realizing my stupid error in specifying the bitmap during array creation, I've triggered a couple of 100% repeatable bugs with this scenario. BUG 1) When I create an array without a bitmap and add it after the array is synced, all works fine with any filesystem. If I create WITH…

Setting write-intent bitmap during array resync/create?

2006-10-10 Thread Eli Stair
I gather this isn't currently possible, but I wonder if it's feasible to make it so? It works fine once the array is marked 'clean', and I imagine it's simpler to just disallow bitmap creation until the array is in that state. Would it be possible to allow creation of the bitmap by queueing…
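The working sequence being described (no bitmap at create time, add it once the array is clean) can be sketched like this. Device names, RAID level, and member count are examples only; the mdadm flags are the standard ones for this operation.

```shell
# Sketch of the workaround: create without a bitmap, add it after the sync.
# /dev/md0 and /dev/sd[abcd]1 are example names, not from the original mail.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]1

# Wait for the initial resync to finish (array becomes 'clean')
while grep -q resync /proc/mdstat; do sleep 10; done

# Only now add the internal write-intent bitmap
mdadm --grow /dev/md0 --bitmap=internal
```

The thread's question is essentially whether that last step could instead be accepted during the resync and queued until the array reaches the clean state.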

Re: [PATCH] md: Fix bug where new drives added to an md array sometimes don't sync properly.

2006-10-10 Thread Eli Stair
…active sync /dev/dm-10
   11     253       11       11      active sync   /dev/dm-11
   12     253       12       12      active sync   /dev/dm-12
   13     253       13       13      active sync   /dev/dm-13
   14     253        8        -      spare         /dev/dm-8
Eli Stair

Re: [PATCH] md: Fix bug where new drives added to an md array sometimes don't sync properly.

2006-10-10 Thread Eli Stair
Thanks Neil, I just gave this patched module a shot on four systems. So far I haven't seen the device number inappropriately increment, though, as per a mail I sent a short while ago, that seemed to be remedied by using the 1.2 superblock, for some reason. However, it appears to have introduced…

Re: [PATCH] md: Fix bug where new drives added to an md array sometimes don't sync properly.

2006-10-10 Thread Eli Stair
…and adding it back in, depending on the function that is called. Eli Stair wrote: Thanks Neil, I just gave this patched module a shot on four systems. So far, I haven't seen the device number inappropriately increment, though as per a mail I sent a short while ago that seemed remedied…

Re: [PATCH] md: Fix bug where new drives added to an md array sometimes don't sync properly.

2006-10-06 Thread Eli Stair
…things got all out of whack, in addition to just not working properly :) Now I've just got to figure out how to get the re-introduced drive to participate in the array again like it should. Eli Stair wrote: I'm actually seeing similar behaviour on RAID10 (2.6.18), where after removing a drive…

Re: RAID10: near, far, offset -- which one?

2006-10-05 Thread Eli Stair
Taken for what it is, here's some recent experience I'm seeing (not the precise explanation you're asking for, which I'd like to know too).
Layout : near=2, far=1
Chunk Size : 512K
gtmp01,16G,,,125798,22,86157,17,,,337603,34,765.3,2,16,240,1,+,+++,237,1,241,1,+,+++,239,1…
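For reference against the subject line, the three RAID10 layouts are selected at creation time with mdadm's layout option: nN (near), fN (far), oN (offset), where N is the number of data copies. The commands below are illustrative sketches only; device names, member count, and chunk size are example values.

```shell
# Example mdadm invocations for the three raid10 layouts (device names are examples)
mdadm --create /dev/md0 --level=10 -n 4 -c 512 -p n2 /dev/sd[abcd]1   # near=2 (the layout benchmarked above)
mdadm --create /dev/md0 --level=10 -n 4 -c 512 -p f2 /dev/sd[abcd]1   # far=2
mdadm --create /dev/md0 --level=10 -n 4 -c 512 -p o2 /dev/sd[abcd]1   # offset=2
```

Roughly: near keeps copies on adjacent devices at the same offset, far places the second copy in a distant region of the other devices (better sequential reads, costlier writes), and offset shifts copies by one device at successive offsets.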

Re: [PATCH] md: Fix bug where new drives added to an md array sometimes don't sync properly.

2006-10-05 Thread Eli Stair
I'm actually seeing similar behaviour on RAID10 (2.6.18), where, after removing a drive from an array, re-adding it sometimes results in it still being listed as a faulty spare and not being taken for resync. In the same scenario, after swapping drives, doing a fail, remove, then an 'add'…
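The fail/remove/re-add cycle being described corresponds to the standard mdadm sequence below. This is a sketch with example device names; in the reported bug, the final step leaves the disk listed as a faulty spare instead of triggering a resync.

```shell
# The cycle under discussion (example devices; requires a running md array)
mdadm /dev/md0 --fail   /dev/sdc1    # mark the member faulty
mdadm /dev/md0 --remove /dev/sdc1    # detach it from the array
mdadm /dev/md0 --add    /dev/sdc1    # re-add; should become a rebuilding spare

mdadm --detail /dev/md0              # inspect the State column for the re-added disk
cat /proc/mdstat                     # a healthy re-add shows a recovery in progress
```

When the bug hits, the re-added device's state never transitions from faulty spare to spare rebuilding, which is what the thread's patch discussion is about.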