basti <black.flederm...@arcor.de> writes:

If I do this it works:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1

This creates a two-disk RAID1 array.
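
(If you want to reproduce this with loop devices, one way to set them up,
assuming ~1 GiB sparse backing files under /tmp with purely illustrative
file names, is roughly:

# truncate -s 1G /tmp/md-test0.img /tmp/md-test1.img
# losetup /dev/loop0 /tmp/md-test0.img
# losetup /dev/loop1 /tmp/md-test1.img
)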

Adding a third disk:

# mdadm --grow /dev/md0 --level=1 --raid-devices=3 --add /dev/loop2
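
The new member has to resync before the array is fully redundant again;
progress shows up in /proc/mdstat, or you can block until it finishes:

# cat /proc/mdstat
# mdadm --wait /dev/md0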

And:

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Apr 14 14:27:00 2015
     Raid Level : raid1
     Array Size : 1047552 (1023.17 MiB 1072.69 MB)
  Used Dev Size : 1047552 (1023.17 MiB 1072.69 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Tue Apr 14 14:28:39 2015
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : gaheris:0  (local to host gaheris)
           UUID : f330ab72:fe728ef6:23d3ab0f:5a6c67ff
         Events : 40

    Number   Major   Minor   RaidDevice State
       0       7        0        0      active sync   /dev/loop0
       1       7        1        1      active sync   /dev/loop1
       2       7        2        2      active sync   /dev/loop2


Removing it:

# mdadm /dev/md0 --fail /dev/loop2 --remove /dev/loop2
mdadm: set /dev/loop2 faulty in /dev/md0
mdadm: hot removed /dev/loop2 from /dev/md0
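
If the removed disk is going to be reused elsewhere, it is probably worth
wiping its md superblock so it does not get auto-assembled later (I have
not needed it here, but the option is standard):

# mdadm --zero-superblock /dev/loop2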

And set the number of raid devices back to 2:

# mdadm --grow /dev/md0 --raid-devices=2
raid_disks for /dev/md0 set to 2
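
To confirm the array is back to a clean two-device state:

# cat /proc/mdstat
# mdadm --detail /dev/md0 | grep -E 'Raid Devices|State'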



> thanks but
>
>
> # mdadm --add /dev/md1 --raid-devices=3 --spare-devices=0 /dev/sde1
> mdadm: option --raid-devices not valid in manage mode
>
> # mdadm --add /dev/md1 --spare-devices=0 /dev/sde1
> mdadm: option --spare-devices not valid in manage mode
>
> # mdadm --grow /dev/md1 --raid-devices=3 --spare-devices=0
> mdadm: option --spare-devices not valid in grow mode
>
> On 14.04.2015 12:07, Darac Marjal wrote:
>> On Tue, Apr 14, 2015 at 11:42:22AM +0200, basti wrote:
>>> Hello
>>> I want to add a 3rd drive to a raid 1 array (for disaster backup, the
>>> drive will be connected once a week).
>>> I try:
>>>
>>> mdadm --add /dev/md1 /dev/sde1
>>>
>>> when I fail the drive there is a message
>>>
>>> FailSpare event detected on md device /dev/md/1, component device /dev/sde1
>>>
>>> How can I add the 3rd drive as "real drive" and not as spare?
>> Looking at the mdadm manpage (i.e. I've not tried this myself), you
>> could try being more explicit about things:
>>
>> mdadm --add /dev/md1 --raid-devices=3 --spare-devices=0 /dev/sde1
>>
>> (Note however that the manpage I'm reading - the one in Wheezy - seems a
>> little unsure whether the option is --raid-devices or --raid-disks).
>>
>> I think, also, you can convert /dev/sde1 from "spare" to "live" by
>> issuing:
>>
>> mdadm --grow /dev/md1 --raid-devices=3 --spare-devices=0
>>
>> One last thing to be aware of: if you DO set up your three-disk RAID1
>> and then take the "backup" drive out of the set, the RAID will be marked
>> as "degraded", so expect warnings to that effect.
>>
>> There doesn't appear to be a way to re-mark a device as spare so, to
>> cleanly remove it, you will need to mark it as faulty (mdadm /dev/md1 -f
>> /dev/sde1). I don't know, off-hand, if this will affect the backup,
>> though.
>>
>>> Best Regards
>>>
>>>

-- 
"We will need a longer wall when the revolution comes."
    --- AJS, quoting an uncertain source.

