"J. David Beutel" <[EMAIL PROTECTED]> writes:
> Neil Brown wrote:
>> 2.6.12 does support reducing the number of drives in a raid1, but it
>> will only remove drives from the end of the list. e.g. if the
>> state was
>>
>> 58604992 blocks [3/2] [UU_]
>>
>> then it would work. But as it is
>>
>> 58604992 blocks [3/2] [_UU]
>>
>> it won't. You could fai
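Neil's rule can be sketched as a small shell check. This is an illustrative helper, not part of mdadm: under the 2.6.12 behaviour described above, shrinking only works when every missing slot ("_") sits at the end of the status string, so the check strips trailing underscores and refuses if any underscore remains before an active drive.

```shell
# Sketch, assuming the status string is copied from the /proc/mdstat line,
# e.g. "[UU_]" or "[_UU]". can_shrink is a hypothetical name.
can_shrink() {
    status="$1"
    inner="${status#\[}"; inner="${inner%\]}"     # drop surrounding brackets
    # remove trailing underscores (empty slots at the END are removable)
    trimmed="${inner%"${inner##*[!_]}"}"
    case "$trimmed" in
        *_*) echo no ;;    # a gap sits before an active drive -> kernel refuses
        *)   echo yes ;;   # all gaps trail the active drives -> shrink works
    esac
}

can_shrink "[UU_]"   # -> yes
can_shrink "[_UU]"   # -> no
```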
On Monday September 10, [EMAIL PROTECTED] wrote:
> On Sun, Sep 09, 2007 at 09:31:54PM -1000, J. David Beutel wrote:
> > [EMAIL PROTECTED] ~]# mdadm --grow /dev/md5 -n2
> > mdadm: Cannot set device size/shape for /dev/md5: Device or resource busy
> >
> > mdadm - v1.6.0 - 4 June 2004
> > Linux 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 athlon i386
Richard Scobie wrote:
> Have a look at the "Grow Mode" section of the mdadm man page.
> It looks as though you should just need to use the same command you
> used to grow it to 3 drives, except specify only 2 this time.

Thanks! I overlooked that, although I did look at the man page before
posting.
J. David Beutel wrote:
My /dev/hdd started failing its SMART check, so I removed it from a RAID1:

# mdadm /dev/md5 -f /dev/hdd2 -r /dev/hdd2

Now when I boot it looks like this in /proc/mdstat:

md5 : active raid1 hdc8[2] hdg8[1]
      58604992 blocks [3/2] [_UU]

and I get a "DegradedArray event on /dev/md5" email on every boot.
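For reference, the "[3/2]" field in that mdstat line means 3 configured slots with only 2 active, and the DegradedArray mail fires whenever the two numbers differ. A hypothetical shell helper (`is_degraded` is not a real mdadm command) that reads that field off a copied mdstat line:

```shell
# Sketch: extract "total/active" from a /proc/mdstat status line and compare.
# The input is a copied line, not a live device, so this is safe to try anywhere.
is_degraded() {
    line="$1"
    counts=$(expr "$line" : '.*\[\([0-9]*/[0-9]*\)\]')  # e.g. "3/2"
    total=${counts%/*}
    active=${counts#*/}
    [ "$total" -ne "$active" ] && echo degraded || echo ok
}

is_degraded "58604992 blocks [3/2] [_UU]"   # -> degraded
is_degraded "58604992 blocks [2/2] [UU]"    # -> ok
```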