On Wednesday October 18, [EMAIL PROTECTED] wrote:
FYI, I'm testing 2.6.18.1 and noticed this mis-numbering of RAID10
members issue is still extant. Even with this fix applied to raid10.c,
I am still seeing repeatable issues with devices assuming a Number
greater than that which they had when removed from a running array.
Issue 1)
I'm
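For reference, the Number being discussed is the slot column in mdadm's member table. A quick way to watch it before and after a remove/re-add (the array and partition names below are placeholders, not taken from this thread):

    # member table; the first column is the Number each device holds
    mdadm --detail /dev/md0

    # the same slot information as recorded in one member's superblock
    mdadm --examine /dev/sdd1

    # overall array state and member ordering
    cat /proc/mdstat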
On Thursday October 12, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
[]
Fix count of degraded drives in raid10.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
--- .prev/drivers/md/raid10.c 2006-10-09 14:18:00.0 +1000
+++ ./drivers/md/raid10.c 2006-10-05 20:10:07.0 +1000
@@ -2079,7 +2079,7 @@ static int run(mddev_t *mddev)
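For context, the hunk above is in run(), and what it corrects is the count of degraded members taken when the array is assembled. A rough user-space way to see that count after failing a member (device names here are placeholders, not from the thread):

    # mark one member failed
    mdadm /dev/md0 --fail /dev/sdd1

    # e.g. [4/3] and [UU_U] indicate one member missing or out of sync
    cat /proc/mdstat

    # the State line and the Active/Working/Failed device counts
    mdadm --detail /dev/md0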
Looks like this issue isn't fully resolved after all. After spending
some time trying to get the re-added drive to sync, I removed and
added it again. This resulted in the previous behaviour I saw: the
drive lost its original numeric position and became Number 14.
This now looks 100% repeatable,
Thanks Neil,
I just gave this patched module a shot on four systems. So far, I
haven't seen the device number inappropriately increment, though, as per
a mail I sent a short while ago, that seemed to be remedied by using the
1.2 superblock, for some reason. However, it appears to have introduced
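The 1.2 superblock mentioned here is chosen when the array is created. Purely as an illustration, with placeholder device names, an array created with that metadata version looks something like:

    # 4-device RAID10 using a version-1.2 superblock (names are placeholders)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 --metadata=1.2 /dev/sd[b-e]1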
In testing this some more, I've determined that (always with this
raid10.c patch, sometimes without) the kernel is not recognizing
marked-faulty drives when they're added back to the array. It appears
to be some bit that is flagged and (I assume) normally cleared when that
drive is re-added
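One general mdadm workaround for a member that keeps stale faulty state when added back (not something proposed in this thread, and the device names are placeholders) is to wipe its old superblock before the add:

    mdadm /dev/md0 --remove /dev/sdd1
    # clear the stale metadata so the device comes back as a fresh spare
    mdadm --zero-superblock /dev/sdd1
    mdadm /dev/md0 --add /dev/sdd1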
On Friday October 6, [EMAIL PROTECTED] wrote:
This patch has resolved the immediate issue I was having on 2.6.18 with
RAID10. Prior to this change, after removing a device from the array
(with mdadm --remove), physically pulling the device, and
changing/re-inserting it, the Number of the new device would be
incremented on top of the
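Spelled out, the sequence described above is roughly the following; device and array names are placeholders, and the physical swap happens between the remove and the add:

    # mark the member failed (if it is not already) and remove it
    mdadm /dev/md0 --fail /dev/sdd1
    mdadm /dev/md0 --remove /dev/sdd1

    # ... physically pull the disk, insert the replacement, repartition as needed ...

    # add the replacement; with the fix, its Number should no longer be bumped past the old one
    mdadm /dev/md0 --add /dev/sdd1
    mdadm --detail /dev/md0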
There is a nasty bug in md in 2.6.18 affecting at least raid1.
This fixes it (and has already been sent to [EMAIL PROTECTED]).
### Comments for Changeset
This fixes a bug introduced in 2.6.18.
If a drive is added to a raid1 using older tools (mdadm-1.x or
raidtools) then it will be included in
I'm actually seeing similar behaviour on RAID10 (2.6.18), where, after
removing a drive from an array, re-adding it sometimes results in it
still being listed as a faulty-spare and not being taken for resync.
In the same scenario, after swapping drives, doing a fail, remove, then
an 'add'
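To check whether a re-added member is actually taken for resync rather than left sitting as a faulty spare (placeholder names again):

    # a recovery/resync progress line appears if the member was accepted
    cat /proc/mdstat

    # the per-member State column shows entries such as "faulty" or "spare rebuilding"
    mdadm --detail /dev/md0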